
Conversation

@acoliver
Collaborator

## Summary

- parse Responses API reasoning SSE events into ThinkingBlocks
- buffer reasoning deltas and emit before usage metadata
- add unit coverage for reasoning-only, interleaved, and edge cases

## Testing

- npm run format
- npm run lint
- npm run typecheck
- npm run test
- npm run build
- node scripts/start.js --profile-load synthetic --prompt "write me a haiku"

fixes #922

@github-actions github-actions bot added the maintainer:e2e:ok Trusted contributor; maintainer-approved E2E run label Jan 16, 2026
@coderabbitai
Contributor

coderabbitai bot commented Jan 16, 2026

Warning

Rate limit exceeded

@acoliver has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 8 minutes and 53 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 8d3401a and c6112a7.

📒 Files selected for processing (1)
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts

Walkthrough

Adds Responses API reasoning support: SSE parsing of reasoning deltas/summaries into ThinkingBlock(s) (preserving encrypted_content), request wiring for reasoning and text.verbosity, CLI buffering/UI visibility changes, multiple tests validating parsing, request payloads, and streaming behavior.

Changes

Cohort / File(s) Summary
SSE parsing & tests
packages/core/src/providers/openai/parseResponsesStream.ts, packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts
Parse Responses SSE reasoning_text/summary deltas and done events; add appendReasoningDelta, ParseResponsesStreamOptions (includeThinkingInResponse) and logging; accumulate/dedupe reasoning, emit ThinkingBlock(s) (preserve encrypted_content); tests simulate SSE chunks and validate emitted IContent ordering/metadata.
Responses provider & tests
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts, packages/core/src/providers/openai-responses/__tests__/*reasoning*.test.ts, packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.codex.malformedCallId.test.ts
Inject reasoning fields into /responses requests (reasoning.effort, reasoning.summary), add include reasoning.encrypted_content when appropriate, map ThinkingBlock encrypted_content into inputs, add text.verbosity support, Codex synthetic read helpers and request adjustments; tests assert request bodies and stream handling.
Content model & validation
packages/core/src/services/history/IContent.ts
Added encryptedContent?: string to ThinkingBlock; updated validation to accept encrypted-only thinking items and refined thinking validation logic.
Request param sanitization
packages/core/src/providers/openai/openaiRequestParams.ts
Added verbosity to internal keys and strip summary: 'none' in sanitization to avoid sending summary: "none" to the API.
CLI settings, registry & aliases
packages/cli/src/ui/commands/setCommand.ts, packages/cli/src/settings/ephemeralSettings.ts, packages/core/src/settings/settingsRegistry.ts, packages/cli/src/providers/aliases/codex.config, packages/cli/src/settings/ephemeralSettings.*.test.ts
Add ephemeral settings reasoning.summary and text.verbosity (help/validation), isValidEphemeralSetting helper, registry entries for new settings and codex alias defaults; tests for validation/help.
CLI streaming, buffering & UI
packages/cli/src/nonInteractiveCli.ts, packages/cli/src/nonInteractiveCli.test.ts, packages/cli/src/ui/components/messages/GeminiMessage.tsx, packages/cli/src/ui/hooks/useGeminiStream.ts, packages/cli/src/ui/hooks/useGeminiStream.thinking.test.tsx
Introduce thought buffering/coalescing in non-interactive CLI (flush before content/tool calls), suppress thinking for pending items in GeminiMessage, expose pending thinking during streaming, prevent duplicate thoughts and clear refs on commit; tests updated/added.
Runtime/profile keys & tests
packages/cli/src/runtime/runtimeSettings.ts, packages/cli/src/runtime/runtimeSettings.reasoningSummary.test.ts, packages/cli/src/providers/providerAliases.codex.reasoningSummary.test.ts
Export PROFILE_EPHEMERAL_KEYS and add tests ensuring reasoning.* and text.verbosity are included in profile-persistable keys and codex alias defaults.
Misc tests & small edits
packages/cli/src/nonInteractiveCli.test.ts, various new/updated tests across CLI/core
Many new/updated tests covering SSE parsing, provider request payloads, settings validation, and UI streaming/behavior changes.
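The ThinkingBlock change in the "Content model & validation" row above can be pictured as follows. This is a hedged sketch: apart from the `encryptedContent` field named in the summary, the field names and validation shape are assumptions, not the actual IContent.ts source.

```typescript
// Sketch of the extended ThinkingBlock described in the walkthrough.
// Only `encryptedContent` is confirmed by the summary; other fields
// are illustrative assumptions.
interface ThinkingBlock {
  type: 'thinking';
  thought?: string; // human-readable reasoning summary
  encryptedContent?: string; // opaque payload preserved for round-trips
}

// Validation now accepts a block carrying EITHER readable thought text
// OR encrypted-only content (per "accept encrypted-only thinking items").
function isValidThinkingBlock(block: ThinkingBlock): boolean {
  const hasThought =
    typeof block.thought === 'string' && block.thought.length > 0;
  const hasEncrypted =
    typeof block.encryptedContent === 'string' &&
    block.encryptedContent.length > 0;
  return hasThought || hasEncrypted;
}
```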

Sequence Diagram

sequenceDiagram
    participant Client
    participant Provider as OpenAIResponsesProvider
    participant ResponsesAPI as OpenAI Responses API
    participant Parser as parseResponsesStream
    participant History as Content/History
    participant UI

    Client->>Provider: generateChatCompletion(with reasoning.*, text.verbosity)
    Provider->>ResponsesAPI: POST /responses (includes reasoning, include, verbosity)
    ResponsesAPI-->>Parser: SSE events (reasoning_text.delta, reasoning_summary_text.delta, output_text.delta, output_item.*)
    Parser->>Parser: accumulate reasoning deltas, dedupe, emit ThinkingBlock(s)
    Parser->>Provider: yield IContent items (ThinkingBlock, TextBlock, ToolCall, usage)
    Provider->>History: persist content (preserve encrypted_content)
    UI->>History: read item (checks reasoning.includeInResponse)
    alt includeInResponse = true
        UI->>UI: render ThinkingBlock(s) + Text
    else
        UI->>UI: render Text only (thinking stored/hidden)
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Poem

🐰 I nibbled SSE crumbs, stitched deltas neat and small,
I hid encrypted carrots safe, then buffered them all.
Thoughts coalesce in my pouch, deduped, tidy, and bright,
Now history hears thinking — and the UI shows the light.
Hop, nibble, rejoice — reasoning streams take flight!

🚥 Pre-merge checks | ✅ 4
✅ Passed checks (4 passed)
- Title check: ✅ Passed. The PR title 'fix(openai-responses): emit reasoning blocks from responses stream' accurately and concisely describes the main change: extending parseResponsesStream to emit reasoning/thinking as ThinkingBlocks.
- Description check: ✅ Passed. The PR description includes a TLDR with objectives and references issue #922, matching the template's Summary and Linked issues sections. Testing commands are provided, though the Testing Matrix is not filled in.
- Linked Issues check: ✅ Passed. All major objectives from issue #922 are met: parseResponsesStream now emits ThinkingBlocks for reasoning events, buffering logic is in place, comprehensive unit tests are added, and thinking blocks integrate with the CLI history/UI rendering pipeline.
- Out of Scope Changes check: ✅ Passed. All changes are directly related to adding reasoning/thinking block support to the Responses provider stream parsing. No unrelated refactoring or feature creep detected.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch issue922

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions
Contributor

github-actions bot commented Jan 16, 2026

LLxprt PR Review – PR #1156

Title: fix(openai-responses): emit reasoning blocks from responses stream
Author: acoliver
Linked Issue: #922 (OpenAI Responses streaming drops reasoning/thinking)


Issue Alignment

Evidence: The PR directly addresses issue #922's root cause.

  1. Request side: OpenAIResponsesProvider.ts now adds include: ['reasoning.encrypted_content'] to requests when reasoning.enabled or reasoning.effort is set (lines ~860-870). Without this, the API returns no reasoning events.

  2. Parsing side: parseResponsesStream.ts now handles SSE event types:

    • response.reasoning_text.delta/done (lines ~179-231)
    • response.reasoning_summary_text.delta/done (lines ~233-270)
    • response.output_item.done with item.type === 'reasoning' (lines ~302-377)
  3. Output: Emits ThinkingBlock with both thought (readable summary) and encryptedContent (base64 for round-trip) fields.
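The request-side change in point 1 can be sketched as below. The Responses API field names (`reasoning.effort`, `reasoning.summary`, `include: ['reasoning.encrypted_content']`) come from the review above; the helper function, settings shape, and model string are hypothetical illustration, not the provider's real code.

```typescript
// Hypothetical settings shape for illustration only.
interface ReasoningSettings {
  enabled?: boolean;
  effort?: 'low' | 'medium' | 'high';
  summary?: string;
}

// Sketch of the request wiring: attach reasoning fields and request
// encrypted reasoning content whenever reasoning is enabled. Without the
// `include` entry the API returns no reasoning events (per the review).
function buildResponsesRequestBody(
  model: string,
  reasoning: ReasoningSettings,
): Record<string, unknown> {
  const body: Record<string, unknown> = { model, stream: true };
  if (reasoning.enabled || reasoning.effort) {
    body.reasoning = {
      ...(reasoning.effort ? { effort: reasoning.effort } : {}),
      ...(reasoning.summary ? { summary: reasoning.summary } : {}),
    };
    body.include = ['reasoning.encrypted_content'];
  }
  return body;
}
```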

Verdict: [OK] Aligned


Side Effects

  • IContent.ts: ThinkingBlock interface extended with encryptedContent?: string field; validation updated to accept blocks with either thought OR encryptedContent.
  • Settings: Added text.verbosity support for Responses API.
  • Deduplication: emittedThoughts Map prevents duplicate thinking blocks when multiple event types carry the same reasoning.
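The `emittedThoughts` deduplication mentioned above could work roughly as follows. The keying scheme (item id plus emitted text) is an assumption for illustration; only the existence of a dedupe Map is stated by the review.

```typescript
// Illustrative dedupe: remember which reasoning payloads were already
// emitted so the same text arriving via a second SSE event type
// (e.g. a done event after deltas) is skipped. The key scheme here is
// an assumption about the real implementation.
const emittedThoughts = new Map<string, Set<string>>();

function shouldEmitThought(itemId: string, text: string): boolean {
  let seen = emittedThoughts.get(itemId);
  if (!seen) {
    seen = new Set<string>();
    emittedThoughts.set(itemId, seen);
  }
  if (seen.has(text)) return false; // duplicate across event types
  seen.add(text);
  return true;
}
```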

Verdict: [OK] Contained


Code Quality

Strengths:

  • Proper deduplication logic prevents pyramid-style duplicates across event types
  • Spacing logic (appendReasoningDelta) handles word-boundary concatenation correctly
  • includeThinkingInResponse option respects user preference to suppress thinking blocks in output while preserving encrypted content
  • Clean separation between request building and stream parsing
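The spacing behavior credited to `appendReasoningDelta` above might look like this in miniature, assuming it joins completed reasoning segments (not raw token deltas); the real implementation may differ.

```typescript
// Miniature of word-boundary concatenation: when joining two completed
// reasoning segments, insert a single space only if neither side already
// provides one. Illustration of the idea, not the real appendReasoningDelta.
function appendReasoningDelta(buffer: string, segment: string): string {
  if (buffer.length === 0 || segment.length === 0) return buffer + segment;
  const bufferEndsWithSpace = /\s$/.test(buffer);
  const segmentStartsWithSpace = /^\s/.test(segment);
  if (bufferEndsWithSpace || segmentStartsWithSpace) return buffer + segment;
  return buffer + ' ' + segment;
}
```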

Observations:

  • Debug logging added via DebugLogger (appropriate for debugging reasoning issues)
  • No obvious race conditions; async generator pattern is sound

Verdict: [OK] Sound


Tests and Coverage

Test files added (1688 lines total):

  • OpenAIResponsesProvider.reasoningInclude.test.ts (661 lines) - request building, SSE parsing, context round-trip
  • OpenAIResponsesProvider.reasoningSummary.test.ts (367 lines) - summary handling
  • OpenAIResponsesProvider.textVerbosity.test.ts (349 lines) - verbosity settings
  • parseResponsesStream.reasoning.test.ts (311 lines) - delta accumulation, interleaved streams, deduplication

Coverage impact: [OK] Increase

Tests cover:

  • Reasoning-only streams
  • Interleaved reasoning + text + tool calls
  • Delta accumulation and spacing
  • Deduplication when output_item.done follows deltas
  • Visibility control via includeThinkingInResponse
  • Context round-trip with encrypted content
  • Edge cases: empty reasoning, usage metadata
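Tests like these typically drive the parser with hand-built SSE chunks. A minimal helper (hypothetical, mirroring what the review describes) might be:

```typescript
// Hypothetical test helper: render Responses-style SSE events as the raw
// text chunks a streaming HTTP body would deliver. The event framing
// (`event:`/`data:` lines, blank-line terminator, `[DONE]` sentinel)
// follows standard SSE conventions; the helper itself is illustrative.
function toSseChunks(
  events: Array<{ type: string } & Record<string, unknown>>,
): string[] {
  const chunks = events.map(
    (event) => `event: ${event.type}\ndata: ${JSON.stringify(event)}\n\n`,
  );
  chunks.push('data: [DONE]\n\n');
  return chunks;
}
```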

Verdict: [OK] Substantial coverage


Verdict

Ready

The PR correctly implements reasoning/thinking block support for OpenAI Responses API by:

  1. Requesting reasoning content via include parameter
  2. Parsing all relevant SSE event types
  3. Emitting ThinkingBlock with proper deduplication
  4. Providing comprehensive test coverage for reasoning-only, interleaved, and edge cases

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@packages/core/src/providers/openai/parseResponsesStream.ts`:
- Line 58: The code currently only handles "response.reasoning_text.delta" and
"response.reasoning_text.done" but misses
"response.reasoning_summary_text.delta"; update the event handling inside
parseResponsesStream (where reasoningText is declared and where events 95-117
are processed) to treat "response.reasoning_summary_text.delta" the same as
"response.reasoning_text.delta" by appending its payload to the existing
reasoningText buffer, and ensure any corresponding "done" handling merges or
finalizes reasoningText as done; reference the reasoningText variable and the
response event switch/if blocks to add the new branch for
"response.reasoning_summary_text.delta" so all reasoning variants are captured.
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86cb6aa and 9d194bb.

📒 Files selected for processing (2)
  • packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2025-12-18T14:06:22.557Z
Learning: OpenAIResponsesProvider (packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts) currently bypasses the ephemeral truncation system by using direct `JSON.stringify(toolResponseBlock.result)` and needs to be updated to support ephemeral settings like the other providers.
📚 Learning: 2025-12-18T14:06:22.557Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2025-12-18T14:06:22.557Z
Learning: OpenAIResponsesProvider (packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts) currently bypasses the ephemeral truncation system by using direct `JSON.stringify(toolResponseBlock.result)` and needs to be updated to support ephemeral settings like the other providers.

Applied to files:

  • packages/core/src/providers/openai/parseResponsesStream.ts
  • packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts
📚 Learning: 2025-11-16T22:51:26.374Z
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.

Applied to files:

  • packages/core/src/providers/openai/parseResponsesStream.ts
  • packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts
🧬 Code graph analysis (1)
packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts (1)
packages/core/src/providers/openai/parseResponsesStream.ts (1)
  • parseResponsesStream (51-237)
⏰ Context from checks skipped due to timeout of 270000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Test (ubuntu-latest, 24.x)
  • GitHub Check: Test (macos-latest, 24.x)
  • GitHub Check: E2E Test (Linux) - sandbox:none
  • GitHub Check: E2E Test (Linux) - sandbox:docker
  • GitHub Check: E2E Test (macOS)
  • GitHub Check: Slow E2E - Win
🔇 Additional comments (4)
packages/core/src/providers/openai/parseResponsesStream.ts (2)

2-4: Doc update matches new behavior.

Clear summary of the added reasoning/thinking handling.


187-202: Reasoning flush before usage looks correct.

Emitting the thinking block prior to usage metadata aligns with the stated ordering requirement.

packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts (2)

4-17: SSE stream helper is clean and deterministic.

The helper makes chunked SSE tests readable and reliable.


20-196: Test coverage is thorough.

Covers reasoning-only, interleaving with text/tool calls, whitespace suppression, accumulation, usage metadata, and ordering.

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.

@github-actions
Contributor

github-actions bot commented Jan 16, 2026

Code Coverage Summary

Package | Lines  | Statements | Functions | Branches
--------|--------|------------|-----------|---------
CLI     | 49.26% | 49.26%     | 56.75%    | 77.05%
Core    | 71.16% | 71.16%     | 73.28%    | 78.92%
CLI Package - Full Text Report
-------------------|---------|----------|---------|---------|-------------------
File               | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s 
-------------------|---------|----------|---------|---------|-------------------
All files          |   49.26 |    77.05 |   56.75 |   49.26 |                   
 src               |   34.81 |    56.52 |   52.38 |   34.81 |                   
  gemini.tsx       |   14.47 |    57.14 |   28.57 |   14.47 | ...,334-1135,1143 
  ...ractiveCli.ts |   63.18 |    62.36 |      50 |   63.18 | ...87-503,521-528 
  ...liCommands.ts |   97.22 |       60 |     100 |   97.22 | 39-40             
  ...ActiveAuth.ts |      36 |    35.71 |      80 |      36 | ...64-169,186-195 
 src/auth          |   52.56 |    64.25 |   67.56 |   52.56 |                   
  ...andlerImpl.ts |   90.72 |    84.61 |   71.42 |   90.72 | ...48-149,155-159 
  ...henticator.ts |     100 |    95.23 |   83.33 |     100 | 170               
  ...ketManager.ts |     100 |      100 |     100 |     100 |                   
  ...h-provider.ts |   56.82 |     53.7 |   66.66 |   56.82 | ...71-605,613-636 
  ...h-provider.ts |   40.74 |    85.71 |   69.23 |   40.74 | ...72-485,489-531 
  ...h-provider.ts |   17.74 |       90 |   27.77 |   17.74 | ...31-562,568-587 
  ...l-oauth-ui.ts |   54.16 |      100 |      40 |   54.16 | 26-32,38-39,57-61 
  ...h-callback.ts |   82.94 |    75.67 |    90.9 |   82.94 | ...74-775,788-790 
  migration.ts     |       0 |        0 |       0 |       0 | 1-69              
  oauth-manager.ts |   56.44 |    56.89 |   76.92 |   56.44 | ...1991,2000-2017 
  ...h-provider.ts |   36.15 |    31.81 |      40 |   36.15 | ...52-490,498-534 
  types.ts         |     100 |      100 |     100 |     100 |                   
 src/commands      |   70.21 |      100 |      25 |   70.21 |                   
  extensions.tsx   |   56.66 |      100 |       0 |   56.66 | 23-34,38          
  mcp.ts           |   94.11 |      100 |      50 |   94.11 | 26                
 ...nds/extensions |   49.46 |       84 |    37.5 |   49.46 |                   
  disable.ts       |   17.54 |      100 |       0 |   17.54 | 17-30,36-63,65-69 
  enable.ts        |   16.12 |      100 |       0 |   16.12 | 17-36,42-68,70-74 
  install.ts       |   78.03 |    71.42 |   66.66 |   78.03 | ...09,155,158-164 
  link.ts          |   24.39 |      100 |       0 |   24.39 | 21-41,48-53,55-58 
  list.ts          |   32.14 |      100 |       0 |   32.14 | 11-27,34-35       
  new.ts           |     100 |      100 |     100 |     100 |                   
  uninstall.ts     |   44.11 |      100 |   33.33 |   44.11 | 14-22,34-39,42-45 
  update.ts        |   10.86 |      100 |       0 |   10.86 | ...43-158,160-164 
  validate.ts      |   90.21 |     87.5 |      75 |   90.21 | 49-52,59,111-114  
 ...les/mcp-server |       0 |        0 |       0 |       0 |                   
  example.ts       |       0 |        0 |       0 |       0 | 1-60              
 src/commands/mcp  |   97.15 |    86.44 |    90.9 |   97.15 |                   
  add.ts           |     100 |    96.15 |     100 |     100 | 210               
  list.ts          |   90.65 |    80.76 |      80 |   90.65 | ...11-113,138-139 
  remove.ts        |     100 |    71.42 |     100 |     100 | 19-23             
 src/config        |   84.82 |    79.16 |   74.07 |   84.82 |                   
  auth.ts          |   90.69 |    89.47 |     100 |   90.69 | 19-20,57-58       
  ...alSettings.ts |   86.66 |    88.88 |     100 |   86.66 | 40-41,44-47       
  config.ts        |   78.39 |    80.95 |   71.42 |   78.39 | ...1832,1835-1839 
  extension.ts     |   78.53 |    87.83 |   75.75 |   78.53 | ...15-816,819-820 
  keyBindings.ts   |     100 |      100 |     100 |     100 |                   
  paths.ts         |     100 |      100 |     100 |     100 |                   
  policy.ts        |   80.76 |      100 |      50 |   80.76 | 45-49             
  ...eBootstrap.ts |      86 |     82.5 |      90 |      86 | ...51-753,762-763 
  sandboxConfig.ts |    66.9 |    47.77 |   89.47 |    66.9 | ...93-500,518-519 
  ...oxProfiles.ts |    8.53 |      100 |       0 |    8.53 | 47-48,51-129      
  settings.ts      |   86.69 |    75.59 |      72 |   86.69 | ...75-776,830-831 
  ...ingsSchema.ts |   99.87 |       75 |     100 |   99.87 | 58-59             
  ...tedFolders.ts |   97.94 |    95.45 |     100 |   97.94 | 86,180-181        
  welcomeConfig.ts |   21.05 |      100 |       0 |   21.05 | ...70,73-78,81-82 
 ...fig/extensions |   71.57 |    82.72 |   91.66 |   71.57 |                   
  ...Enablement.ts |   93.87 |       96 |     100 |   93.87 | ...98-204,265-267 
  ...onSettings.ts |     100 |      100 |     100 |     100 |                   
  github.ts        |   53.01 |    82.53 |   81.81 |   53.01 | ...22-427,433-459 
  ...ntegration.ts |   90.29 |    77.77 |     100 |   90.29 | ...62-163,167-168 
  ...ingsPrompt.ts |   72.72 |    94.73 |      80 |   72.72 | 92-121            
  ...ngsStorage.ts |   73.09 |    69.81 |   92.85 |   73.09 | ...18,339-340,343 
  update.ts        |   62.34 |    46.66 |   66.66 |   62.34 | ...22-150,167-175 
  ...ableSchema.ts |     100 |      100 |     100 |     100 |                   
  variables.ts     |   95.34 |       90 |     100 |   95.34 | 30-31             
 src/constants     |     100 |      100 |     100 |     100 |                   
  historyLimits.ts |     100 |      100 |     100 |     100 |                   
 src/extensions    |   65.75 |    57.89 |      75 |   65.75 |                   
  ...utoUpdater.ts |   65.75 |    57.89 |      75 |   65.75 | ...49-450,459,461 
 src/generated     |     100 |      100 |     100 |     100 |                   
  git-commit.ts    |     100 |      100 |     100 |     100 |                   
 ...egration-tests |   90.72 |    84.61 |     100 |   90.72 |                   
  test-utils.ts    |   90.72 |    84.61 |     100 |   90.72 | ...01,219-220,230 
 src/patches       |       0 |        0 |       0 |       0 |                   
  is-in-ci.ts      |       0 |        0 |       0 |       0 | 1-17              
 src/providers     |   83.23 |    71.36 |   78.84 |   83.23 |                   
  IFileSystem.ts   |    86.2 |    85.71 |   85.71 |    86.2 | 51-52,67-68       
  ...Precedence.ts |   94.59 |    86.66 |     100 |   94.59 | 40-41             
  index.ts         |       0 |        0 |       0 |       0 | 1-19              
  ...gistration.ts |   77.94 |    68.75 |   33.33 |   77.94 | ...,93-97,103-104 
  ...derAliases.ts |   74.35 |    66.66 |     100 |   74.35 | ...43-149,154-155 
  ...onfigUtils.ts |   92.45 |       75 |     100 |   92.45 | 25-29             
  ...erInstance.ts |   84.44 |    70.77 |   79.31 |   84.44 | ...53-757,875-876 
  types.ts         |       0 |        0 |       0 |       0 | 1-8               
 ...viders/logging |   87.59 |    88.63 |   63.63 |   87.59 |                   
  ...rvice-impl.ts |   44.44 |        0 |       0 |   44.44 | 21-22,25-30,36-37 
  git-stats.ts     |   94.59 |    90.69 |     100 |   94.59 | ...48-149,180-181 
 src/runtime       |    66.4 |    72.22 |   69.67 |    66.4 |                   
  ...imeAdapter.ts |   97.03 |    89.65 |     100 |   97.03 | ...38,344-345,541 
  ...etFailover.ts |   97.05 |    91.66 |     100 |   97.05 | 31-32,215         
  messages.ts      |      20 |      100 |       0 |      20 | ...0,38-66,74-102 
  ...pplication.ts |   82.78 |    71.31 |      70 |   82.78 | ...65-668,679-680 
  ...extFactory.ts |   91.28 |    72.41 |     100 |   91.28 | ...63-266,351-358 
  ...meSettings.ts |   54.02 |    66.77 |   55.22 |   54.02 | ...2148,2173-2228 
 src/services      |   72.51 |     88.5 |   83.33 |   72.51 |                   
  ...mandLoader.ts |     100 |      100 |     100 |     100 |                   
  ...ardService.ts |    91.3 |    33.33 |     100 |    91.3 | 35-36             
  ...andService.ts |     100 |      100 |     100 |     100 |                   
  ...mandLoader.ts |   88.77 |    90.47 |     100 |   88.77 | ...79-184,258-265 
  ...omptLoader.ts |   30.68 |    81.25 |      50 |   30.68 | ...80-281,284-288 
  types.ts         |       0 |        0 |       0 |       0 | 1                 
 ...mpt-processors |   97.56 |    94.11 |     100 |   97.56 |                   
  ...tProcessor.ts |     100 |      100 |     100 |     100 |                   
  ...lProcessor.ts |   97.36 |    93.61 |     100 |   97.36 | 77-78,202-203     
  types.ts         |     100 |      100 |     100 |     100 |                   
 ...o-continuation |   85.62 |    82.14 |   94.11 |   85.62 |                   
  ...ionService.ts |   85.62 |    82.14 |   94.11 |   85.62 | ...94,553,579-580 
 src/settings      |   85.96 |     64.7 |     100 |   85.96 |                   
  ...alSettings.ts |   94.44 |       70 |     100 |   94.44 | 74-75             
  ...aramParser.ts |   71.42 |    57.14 |     100 |   71.42 | 21-22,24-25,30-31 
 src/test-utils    |      72 |     92.5 |   22.22 |      72 |                   
  ...eExtension.ts |     100 |      100 |     100 |     100 |                   
  ...omMatchers.ts |   21.21 |      100 |       0 |   21.21 | 22-50             
  ...andContext.ts |     100 |      100 |     100 |     100 |                   
  render.tsx       |   77.55 |    96.29 |   18.75 |   77.55 | ...58-179,262-263 
  ...e-testing.tsx |       0 |        0 |       0 |       0 | 1-56              
  ...iderConfig.ts |       0 |        0 |       0 |       0 | 1-19              
 src/ui            |    15.8 |    98.36 |   29.87 |    15.8 |                   
  App.tsx          |   34.48 |      100 |       0 |   34.48 | 50-85,91-98       
  AppContainer.tsx |    5.06 |      100 |       0 |    5.06 | 155-164,193-2492  
  ...tionNudge.tsx |       8 |      100 |       0 |       8 | 27-102            
  colors.ts        |   37.14 |      100 |   20.33 |   37.14 | ...03-304,306-307 
  constants.ts     |     100 |      100 |     100 |     100 |                   
  debug.ts         |     100 |      100 |     100 |     100 |                   
  ...derOptions.ts |     100 |      100 |     100 |     100 |                   
  keyMatchers.ts   |   95.65 |    96.29 |     100 |   95.65 | 29-30             
  ...ntsEnabled.ts |     100 |      100 |     100 |     100 |                   
  ...submission.ts |     100 |      100 |     100 |     100 |                   
  ...tic-colors.ts |   78.94 |      100 |      60 |   78.94 | 15-16,24-25       
  textConstants.ts |     100 |      100 |     100 |     100 |                   
  types.ts         |     100 |      100 |     100 |     100 |                   
 src/ui/commands   |   65.26 |    76.84 |   63.48 |   65.26 |                   
  aboutCommand.ts  |      75 |       24 |     100 |      75 | ...05,112-113,141 
  authCommand.ts   |   74.95 |     84.4 |   83.33 |   74.95 | ...39-642,652-676 
  ...urlCommand.ts |      30 |      100 |       0 |      30 | 20-40             
  bugCommand.ts    |   79.16 |     37.5 |     100 |   79.16 | 32-35,42,79-88    
  chatCommand.ts   |   63.38 |    77.27 |      50 |   63.38 | ...87-509,526-536 
  clearCommand.ts  |     100 |      100 |     100 |     100 |                   
  ...essCommand.ts |   12.19 |      100 |       0 |   12.19 | 16-89             
  copyCommand.ts   |   98.27 |    94.44 |     100 |   98.27 | 37                
  debugCommands.ts |   13.29 |      100 |       0 |   13.29 | ...48,455,462,469 
  ...icsCommand.ts |    62.5 |    57.14 |   33.33 |    62.5 | ...88,320,427-432 
  ...ryCommand.tsx |   16.86 |      100 |       0 |   16.86 | ...38-148,155-179 
  docsCommand.ts   |     100 |      100 |     100 |     100 |                   
  ...extCommand.ts |   93.18 |    77.77 |     100 |   93.18 | 108-113           
  editorCommand.ts |     100 |      100 |     100 |     100 |                   
  ...onsCommand.ts |   97.61 |    89.28 |     100 |   97.61 | 22,53,130         
  helpCommand.ts   |     100 |      100 |     100 |     100 |                   
  ideCommand.ts    |   66.35 |    68.96 |   55.55 |   66.35 | ...22-225,233-240 
  initCommand.ts   |   83.33 |    71.42 |   66.66 |   83.33 | 35-39,41-85       
  keyCommand.ts    |     100 |    77.77 |     100 |     100 | 47                
  ...ileCommand.ts |   11.11 |      100 |       0 |   11.11 | 23-134            
  ...ingCommand.ts |   10.96 |      100 |       0 |   10.96 | ...59-528,545-556 
  logoutCommand.ts |   15.62 |      100 |       0 |   15.62 | 21-85             
  mcpCommand.ts    |   82.16 |    82.22 |   83.33 |   82.16 | ...10-411,429-430 
  memoryCommand.ts |   88.82 |    83.87 |     100 |   88.82 | 69-83,96-101,152  
  modelCommand.ts  |     100 |     97.5 |     100 |     100 | 122               
  mouseCommand.ts  |     100 |      100 |     100 |     100 |                   
  ...onsCommand.ts |     100 |      100 |     100 |     100 |                   
  ...iesCommand.ts |   97.02 |    82.85 |     100 |   97.02 | 27,40-41          
  ...acyCommand.ts |   61.53 |      100 |       0 |   61.53 | 22-26             
  ...ileCommand.ts |   61.47 |    73.04 |   69.23 |   61.47 | ...1044,1065-1081 
  ...derCommand.ts |   53.12 |    30.55 |      80 |   53.12 | ...58-262,270-275 
  quitCommand.ts   |   34.48 |      100 |       0 |   34.48 | 16-35             
  ...oreCommand.ts |   92.53 |     87.5 |     100 |   92.53 | ...,90-91,120-125 
  setCommand.ts    |   82.84 |    77.27 |      80 |   82.84 | ...38-843,885-898 
  ...ngsCommand.ts |     100 |      100 |     100 |     100 |                   
  ...hubCommand.ts |     100 |      100 |     100 |     100 |                   
  statsCommand.ts  |   94.33 |     90.9 |     100 |   94.33 | 26-34             
  statusCommand.ts |   13.63 |      100 |       0 |   13.63 | 20-87             
  ...entCommand.ts |   83.46 |    79.71 |   83.33 |   83.46 | ...18-624,651-664 
  ...tupCommand.ts |     100 |      100 |     100 |     100 |                   
  themeCommand.ts  |     100 |      100 |     100 |     100 |                   
  ...matCommand.ts |   26.66 |      100 |       0 |   26.66 | 33-92             
  toolsCommand.ts  |   84.98 |     74.6 |     100 |   84.98 | ...85-294,307-308 
  types.ts         |     100 |      100 |     100 |     100 |                   
  ...ileCommand.ts |   61.11 |      100 |       0 |   61.11 | 16-22             
  vimCommand.ts    |   44.44 |      100 |       0 |   44.44 | 14-24             
 ...ommands/schema |   96.22 |    91.02 |    92.3 |   96.22 |                   
  index.ts         |   96.45 |    91.61 |     100 |   96.45 | ...08-412,423-424 
  types.ts         |       0 |        0 |       0 |       0 | 1                 
 src/ui/components |   10.27 |    34.73 |    2.32 |   10.27 |                   
  AboutBox.tsx     |    4.03 |      100 |       0 |    4.03 | 27-161            
  AsciiArt.ts      |     100 |      100 |     100 |     100 |                   
  AuthDialog.tsx   |    6.29 |      100 |       0 |    6.29 | 27-194            
  ...nProgress.tsx |   16.66 |      100 |       0 |   16.66 | 18-62             
  ...Indicator.tsx |   15.15 |      100 |       0 |   15.15 | 17-47             
  ...firmation.tsx |    7.31 |      100 |       0 |    7.31 | 45-179            
  ...tsDisplay.tsx |    7.69 |      100 |       0 |    7.69 | 23-34,38-156      
  CliSpinner.tsx   |       0 |        0 |       0 |       0 | 1-22              
  Composer.tsx     |     9.8 |      100 |       0 |     9.8 | 24-73             
  ...entPrompt.tsx |   18.75 |      100 |       0 |   18.75 | 21-51             
  ...ryDisplay.tsx |   21.05 |      100 |       0 |   21.05 | 17-35             
  ...ryDisplay.tsx |    4.93 |      100 |       0 |    4.93 | 26-112            
  ...geDisplay.tsx |       0 |        0 |       0 |       0 | 1-37              
  ...gProfiler.tsx |   17.88 |      100 |       0 |   17.88 | ...71-116,120-199 
  ...esDisplay.tsx |   10.52 |      100 |       0 |   10.52 | 24-82             
  ...ogManager.tsx |    9.26 |      100 |       0 |    9.26 | 59-487            
  ...ngsDialog.tsx |    6.53 |      100 |       0 |    6.53 | 27-189            
  ...rBoundary.tsx |   10.16 |        0 |       0 |   10.16 | ...16-161,179-191 
  ...ustDialog.tsx |   15.73 |      100 |       0 |   15.73 | 31-123            
  Footer.tsx       |    8.94 |      100 |     100 |    8.94 | ...30-508,512-525 
  ...ngSpinner.tsx |    40.9 |      100 |       0 |    40.9 | 31-47             
  Header.tsx       |    17.5 |      100 |       0 |    17.5 | 22-62             
  Help.tsx         |    3.17 |      100 |       0 |    3.17 | 17-179            
  ...emDisplay.tsx |   19.23 |      100 |       0 |   19.23 | 51-179            
  InputPrompt.tsx  |    38.8 |     37.2 |   66.66 |    38.8 | ...4-902,916-1065 
  ...tsDisplay.tsx |    4.41 |      100 |       0 |    4.41 | 26-37,41-249      
  ...utManager.tsx |       0 |        0 |       0 |       0 | 1-97              
  ...ileDialog.tsx |    6.89 |      100 |       0 |    6.89 | 20-119            
  ...Indicator.tsx |   14.54 |      100 |       0 |   14.54 | 24-81             
  ...ingDialog.tsx |    4.52 |      100 |       0 |    4.52 | ...9,84-90,93-354 
  ...geDisplay.tsx |       0 |        0 |       0 |       0 | 1-40              
  ModelDialog.tsx  |    1.79 |      100 |       0 |    1.79 | 53-76,79-628      
  ...tsDisplay.tsx |    6.28 |      100 |       0 |    6.28 | 33-52,56-214      
  ...fications.tsx |   15.65 |      100 |       0 |   15.65 | 36-149            
  ...odeDialog.tsx |    7.31 |      100 |       0 |    7.31 | 30-140            
  ...ustDialog.tsx |    6.21 |      100 |       0 |    6.21 | 30-237            
  PrepareLabel.tsx |   13.33 |      100 |       0 |   13.33 | 20-48             
  ...ailDialog.tsx |   11.58 |      100 |       0 |   11.58 | 57-68,71-343      
  ...ineEditor.tsx |    2.59 |      100 |       0 |    2.59 | 25-65,69-357      
  ...istDialog.tsx |    2.99 |      100 |       0 |    2.99 | 35-369            
  ...derDialog.tsx |    3.84 |      100 |       0 |    3.84 | 22-272            
  ...Indicator.tsx |       0 |        0 |       0 |       0 | 1-21              
  ...eKeyInput.tsx |       0 |        0 |       0 |       0 | 1-138             
  ...ryDisplay.tsx |      50 |      100 |       0 |      50 | 15-17             
  ...ngsDialog.tsx |    1.95 |      100 |       0 |    1.95 | ...0-109,112-1272 
  ...ionDialog.tsx |    12.5 |      100 |       0 |    12.5 | 32-114            
  ...Indicator.tsx |   44.44 |      100 |       0 |   44.44 | 12-17             
  ...MoreLines.tsx |      28 |      100 |       0 |      28 | 18-40             
  StatsDisplay.tsx |    6.82 |      100 |       0 |    6.82 | ...85-160,168-339 
  ...nsDisplay.tsx |    7.76 |      100 |       0 |    7.76 | 49-164            
  ThemeDialog.tsx  |    4.85 |      100 |       0 |    4.85 | 34-338            
  Tips.tsx         |      16 |      100 |       0 |      16 | 17-45             
  TodoPanel.tsx    |    5.55 |      100 |       0 |    5.55 | 26-74,77-245      
  ...tsDisplay.tsx |    7.42 |      100 |       0 |    7.42 | 30-53,56-228      
  ToolsDialog.tsx  |    7.86 |      100 |       0 |    7.86 | 23-119            
  ...ification.tsx |   36.36 |      100 |       0 |   36.36 | 15-22             
  ...ionDialog.tsx |    7.52 |      100 |       0 |    7.52 | 18-122            
  todo-utils.ts    |       0 |        0 |       0 |       0 | 1-7               
 ...leCreateWizard |   19.24 |       50 |       0 |   19.24 |                   
  ...aramsStep.tsx |    5.82 |      100 |       0 |    5.82 | 27-244            
  ...ationStep.tsx |    4.82 |      100 |       0 |    4.82 | 27-294            
  ...onfigStep.tsx |   11.23 |      100 |       0 |   11.23 | 25-119            
  ...electStep.tsx |    6.11 |      100 |       0 |    6.11 | 28-235            
  ...ationMenu.tsx |       0 |        0 |       0 |       0 | 1-101             
  ...eSaveStep.tsx |    6.28 |      100 |       0 |    6.28 | 33-255            
  ...ssSummary.tsx |   12.12 |      100 |       0 |   12.12 | 22-87             
  ...electStep.tsx |   16.92 |      100 |       0 |   16.92 | 27-94             
  TextInput.tsx    |     5.6 |      100 |       0 |     5.6 | 27-168            
  constants.ts     |     100 |      100 |     100 |     100 |                   
  index.tsx        |    6.27 |      100 |       0 |    6.27 | 28-296            
  types.ts         |     100 |      100 |     100 |     100 |                   
  utils.ts         |    5.22 |      100 |       0 |    5.22 | ...46-350,355-372 
  validation.ts    |   11.23 |      100 |       0 |   11.23 | ...97-104,107-111 
 ...gentManagement |     7.7 |      100 |       0 |     7.7 |                   
  ...entWizard.tsx |    4.06 |      100 |       0 |    4.06 | 33-238            
  ...ionWizard.tsx |    2.81 |      100 |       0 |    2.81 | 28-362            
  ...eteDialog.tsx |    6.59 |      100 |       0 |    6.59 | 21-126            
  ...tEditForm.tsx |    3.72 |      100 |       0 |    3.72 | 24-249            
  ...tListMenu.tsx |    3.97 |      100 |       0 |    3.97 | 25-236            
  ...tMainMenu.tsx |   18.75 |      100 |       0 |   18.75 | 19-49             
  ...gerDialog.tsx |    3.89 |      100 |       0 |    3.89 | 26-444            
  ...tShowView.tsx |    4.03 |      100 |       0 |    4.03 | 21-162            
  index.ts         |     100 |      100 |     100 |     100 |                   
  types.ts         |     100 |      100 |     100 |     100 |                   
 ...comeOnboarding |   13.04 |        0 |       0 |   13.04 |                   
  ...ethodStep.tsx |   22.47 |      100 |       0 |   22.47 | 43-128            
  ...ationStep.tsx |    5.42 |      100 |       0 |    5.42 | 28-182            
  ...etionStep.tsx |    5.08 |      100 |       0 |    5.08 | 22-164            
  ...electStep.tsx |    7.95 |      100 |       0 |    7.95 | 30-131            
  ...electStep.tsx |   34.48 |      100 |       0 |   34.48 | 50-119            
  SkipExitStep.tsx |    12.5 |      100 |       0 |    12.5 | 18-59             
  ...omeDialog.tsx |   12.38 |      100 |       0 |   12.38 | 37-146            
  WelcomeStep.tsx  |    10.2 |      100 |       0 |    10.2 | 23-74             
  index.ts         |       0 |        0 |       0 |       0 | 1-13              
 ...nents/messages |   10.69 |      100 |    3.57 |   10.69 |                   
  ...onMessage.tsx |   13.72 |      100 |       0 |   13.72 | 24-80             
  DiffRenderer.tsx |    3.33 |      100 |       0 |    3.33 | ...79-360,363-381 
  ErrorMessage.tsx |   22.22 |      100 |       0 |   22.22 | 16-31             
  ...niMessage.tsx |   15.78 |      100 |       0 |   15.78 | 27-88             
  ...geContent.tsx |   20.83 |      100 |       0 |   20.83 | 26-46             
  InfoMessage.tsx  |   26.31 |      100 |       0 |   26.31 | 17-32             
  ...rlMessage.tsx |   11.36 |      100 |       0 |   11.36 | 18-65             
  ...ckDisplay.tsx |      20 |      100 |       0 |      20 | 43-64             
  ...onMessage.tsx |    4.45 |      100 |       0 |    4.45 | 41-374            
  ...upMessage.tsx |   10.36 |      100 |       0 |   10.36 | ...2,65-80,84-254 
  ToolMessage.tsx  |    7.11 |      100 |       0 |    7.11 | ...66-400,403-406 
  UserMessage.tsx  |     100 |      100 |     100 |     100 |                   
  ...llMessage.tsx |   36.36 |      100 |       0 |   36.36 | 17-25             
  ...ngMessage.tsx |   26.31 |      100 |       0 |   26.31 | 17-32             
 ...ponents/shared |   31.87 |    56.35 |    61.9 |   31.87 |                   
  ...ctionList.tsx |    5.55 |      100 |       0 |    5.55 | 53-184            
  MaxSizedBox.tsx  |    2.37 |      100 |       0 |    2.37 | 23-50,99-625      
  ...tonSelect.tsx |   13.63 |      100 |       0 |   13.63 | 57-100            
  ...lableList.tsx |    8.49 |      100 |       0 |    8.49 | 45-153            
  ...lizedList.tsx |    2.29 |      100 |       0 |    2.29 | 56-486            
  text-buffer.ts   |    51.8 |    62.01 |   83.33 |    51.8 | ...1823-1873,1911 
  ...er-actions.ts |   30.78 |    38.59 |      50 |   30.78 | ...98-806,810-812 
 ...mponents/views |    11.7 |      100 |       0 |    11.7 |                   
  ChatList.tsx     |    14.7 |      100 |       0 |    14.7 | 18-51             
  ...sionsList.tsx |      10 |      100 |       0 |      10 | 19-80             
 src/ui/constants  |     100 |      100 |     100 |     100 |                   
  ...ollections.ts |     100 |      100 |     100 |     100 |                   
 src/ui/containers |       0 |        0 |       0 |       0 |                   
  ...ontroller.tsx |       0 |        0 |       0 |       0 | 1-340             
  UIStateShell.tsx |       0 |        0 |       0 |       0 | 1-15              
 src/ui/contexts   |   53.81 |    72.35 |   38.33 |   53.81 |                   
  ...chContext.tsx |    64.7 |      100 |      50 |    64.7 | 24-29             
  FocusContext.tsx |       0 |        0 |       0 |       0 | 1-11              
  ...ssContext.tsx |    74.4 |    76.82 |   78.57 |    74.4 | ...23-929,932-988 
  MouseContext.tsx |      70 |    68.75 |      80 |      70 | ...23-136,143-144 
  ...erContext.tsx |       0 |        0 |       0 |       0 | 1-120             
  ...owContext.tsx |   19.64 |      100 |       0 |   19.64 | 33,36,39-87       
  ...meContext.tsx |   46.92 |       25 |   28.57 |   46.92 | ...91,195-196,201 
  ...lProvider.tsx |   89.16 |    69.81 |     100 |   89.16 | ...79-380,387-388 
  ...onContext.tsx |    6.73 |      100 |       0 |    6.73 | ...88-282,287-294 
  ...teContext.tsx |       0 |        0 |       0 |       0 | 1-61              
  ...gsContext.tsx |      50 |      100 |       0 |      50 | 15-20             
  ...ngContext.tsx |   42.85 |      100 |       0 |   42.85 | 15-22             
  TodoContext.tsx  |   55.55 |      100 |       0 |   55.55 | 19-22,24-27       
  TodoProvider.tsx |    6.94 |      100 |       0 |    6.94 | 24-105            
  ...llContext.tsx |     100 |      100 |       0 |     100 |                   
  ...lProvider.tsx |    6.75 |      100 |       0 |    6.75 | 28-122            
  ...nsContext.tsx |      25 |      100 |       0 |      25 | 195-206,209-214   
  ...teContext.tsx |   27.77 |      100 |       0 |   27.77 | 235-244,247-252   
  ...deContext.tsx |   11.11 |      100 |       0 |   11.11 | 29-81,84-89       
 src/ui/editors    |   94.11 |    85.71 |   66.66 |   94.11 |                   
  ...ngsManager.ts |   94.11 |    85.71 |   66.66 |   94.11 | 55,69-70          
 src/ui/hooks      |   55.45 |    79.33 |   66.66 |   55.45 |                   
  ...dProcessor.ts |   78.19 |    77.27 |     100 |   78.19 | ...15-518,530-549 
  index.ts         |       0 |        0 |       0 |       0 | 1-9               
  ...dProcessor.ts |    96.4 |    75.67 |     100 |    96.4 | ...18-219,224-225 
  ...dProcessor.ts |   29.49 |    52.38 |      50 |   29.49 | ...76-377,382-766 
  ...dScrollbar.ts |   96.55 |      100 |     100 |   96.55 | 104-106           
  ...Completion.ts |   92.77 |    89.28 |     100 |   92.77 | ...91-192,225-228 
  ...uthCommand.ts |    6.45 |      100 |       0 |    6.45 | 15-135            
  ...tIndicator.ts |   80.95 |     87.5 |     100 |   80.95 | 37,39-49          
  ...chedScroll.ts |   16.66 |      100 |       0 |   16.66 | 14-32             
  ...ketedPaste.ts |      20 |      100 |       0 |      20 | 20-38             
  ...ompletion.tsx |   92.76 |    83.33 |     100 |   92.76 | ...22-223,227-234 
  useCompletion.ts |    92.4 |     87.5 |     100 |    92.4 | ...,95-96,100-101 
  ...leMessages.ts |       5 |      100 |       0 |       5 | 29-65,68-118      
  ...fileDialog.ts |   16.12 |      100 |       0 |   16.12 | 17-47             
  ...orSettings.ts |   11.11 |      100 |       0 |   11.11 | 29-81             
  ...AutoUpdate.ts |    9.52 |      100 |       0 |    9.52 | 18-58             
  ...ionUpdates.ts |   67.47 |    76.92 |   66.66 |   67.47 | ...79-185,200-217 
  ...erDetector.ts |     100 |      100 |     100 |     100 |                   
  useFocus.ts      |     100 |      100 |     100 |     100 |                   
  ...olderTrust.ts |     100 |      100 |     100 |     100 |                   
  ...miniStream.ts |   51.45 |    52.38 |      40 |   51.45 | ...1478,1508-1610 
  ...BranchName.ts |     100 |    88.88 |     100 |     100 | 58,61             
  ...oryManager.ts |   96.26 |     92.1 |     100 |   96.26 | ...66-167,210-211 
  ...stListener.ts |   12.12 |      100 |       0 |   12.12 | 17-50             
  ...putHistory.ts |    92.5 |    85.71 |     100 |    92.5 | 62-63,71,93-95    
  ...storyStore.ts |     100 |    94.11 |     100 |     100 | 66                
  useKeypress.ts   |   22.22 |      100 |       0 |   22.22 | 24-39             
  ...rdProtocol.ts |   36.36 |      100 |       0 |   36.36 | 24-31             
  ...fileDialog.ts |    5.71 |      100 |       0 |    5.71 | 27-135            
  ...gIndicator.ts |     100 |      100 |     100 |     100 |                   
  useLogger.ts     |   93.75 |      100 |     100 |   93.75 | 26                
  ...oryMonitor.ts |     100 |      100 |     100 |     100 |                   
  useMouse.ts      |   77.77 |    66.66 |     100 |   77.77 | 31-34             
  ...eSelection.ts |    3.13 |      100 |       0 |    3.13 | 36-103,106-315    
  ...oviderInfo.ts |       0 |        0 |       0 |       0 | 1-80              
  ...odifyTrust.ts |    9.09 |      100 |       0 |    9.09 | 46-137            
  ...raseCycler.ts |    84.9 |    76.92 |     100 |    84.9 | 43-45,48-49,65-67 
  ...cySettings.ts |   87.28 |     82.6 |     100 |   87.28 | ...21-122,133-144 
  ...Management.ts |    2.48 |      100 |       0 |    2.48 | 21-62,74-423      
  ...Completion.ts |   29.41 |       40 |     100 |   29.41 | ...14-227,236-242 
  ...iderDialog.ts |    7.89 |      100 |       0 |    7.89 | 27-110            
  ...lScheduler.ts |   67.51 |    80.64 |   77.77 |   67.51 | ...73-475,571-581 
  ...oryCommand.ts |       0 |        0 |       0 |       0 | 1-7               
  useResponsive.ts |     100 |      100 |     100 |     100 |                   
  ...ompletion.tsx |   69.56 |      100 |     100 |   69.56 | 45-47,51-66,78-81 
  ...ectionList.ts |   87.29 |    87.91 |     100 |   87.29 | ...10-411,420-423 
  useSession.ts    |       0 |        0 |       0 |       0 | 1-23              
  ...ngsCommand.ts |   18.75 |      100 |       0 |   18.75 | 10-25             
  ...ellHistory.ts |   91.66 |    79.41 |     100 |   91.66 | ...69,117-118,128 
  ...oryCommand.ts |       0 |        0 |       0 |       0 | 1-62              
  ...ompletion.tsx |   80.42 |    83.81 |      75 |   80.42 | ...54-855,857-858 
  ...leCallback.ts |     100 |      100 |     100 |     100 |                   
  ...tateAndRef.ts |   59.09 |      100 |     100 |   59.09 | 23-31             
  ...oryRefresh.ts |     100 |      100 |     100 |     100 |                   
  ...rminalSize.ts |   11.42 |      100 |       0 |   11.42 | 13-55             
  ...emeCommand.ts |    6.03 |      100 |       0 |    6.03 | 26-151            
  useTimer.ts      |   88.09 |    85.71 |     100 |   88.09 | 44-45,51-53       
  ...ntinuation.ts |       0 |        0 |       0 |       0 | 1-270             
  ...ePreserver.ts |   48.48 |      100 |      75 |   48.48 | 33-50             
  ...oolsDialog.ts |    4.67 |      100 |       0 |    4.67 | 24-145            
  ...Onboarding.ts |    2.57 |      100 |       0 |    2.57 | 76-392            
  ...eMigration.ts |   10.34 |      100 |       0 |   10.34 | 14-72             
  vim.ts           |   83.57 |     79.5 |     100 |   83.57 | ...38,742-750,759 
 src/ui/layouts    |    5.34 |      100 |       0 |    5.34 |                   
  ...AppLayout.tsx |    5.34 |      100 |       0 |    5.34 | 57-74,77-641      
 ...noninteractive |      75 |      100 |    6.66 |      75 |                   
  ...eractiveUi.ts |      75 |      100 |    6.66 |      75 | 17-19,23-24,27-28 
 src/ui/privacy    |   25.78 |      100 |       0 |   25.78 |                   
  ...acyNotice.tsx |   10.97 |      100 |       0 |   10.97 | 22-123            
  ...acyNotice.tsx |   14.28 |      100 |       0 |   14.28 | 16-59             
  ...acyNotice.tsx |   12.19 |      100 |       0 |   12.19 | 16-62             
  ...acyNotice.tsx |   41.33 |      100 |       0 |   41.33 | 78-91,99-193      
  ...acyNotice.tsx |   21.95 |      100 |       0 |   21.95 | 20-59,62-64       
 src/ui/reducers   |   78.44 |     90.9 |      50 |   78.44 |                   
  appReducer.ts    |     100 |      100 |     100 |     100 |                   
  ...ionReducer.ts |       0 |        0 |       0 |       0 | 1-52              
 src/ui/state      |   61.25 |    33.33 |     100 |   61.25 |                   
  extensions.ts    |   61.25 |    33.33 |     100 |   61.25 | ...22,124-127,129 
 src/ui/themes     |   99.17 |    81.11 |   96.15 |   99.17 |                   
  ansi-light.ts    |     100 |      100 |     100 |     100 |                   
  ansi.ts          |     100 |      100 |     100 |     100 |                   
  atom-one-dark.ts |     100 |      100 |     100 |     100 |                   
  ayu-light.ts     |     100 |      100 |     100 |     100 |                   
  ayu.ts           |     100 |      100 |     100 |     100 |                   
  color-utils.ts   |     100 |      100 |     100 |     100 |                   
  default-light.ts |     100 |      100 |     100 |     100 |                   
  default.ts       |     100 |      100 |     100 |     100 |                   
  dracula.ts       |     100 |      100 |     100 |     100 |                   
  github-dark.ts   |     100 |      100 |     100 |     100 |                   
  github-light.ts  |     100 |      100 |     100 |     100 |                   
  googlecode.ts    |     100 |      100 |     100 |     100 |                   
  green-screen.ts  |     100 |      100 |     100 |     100 |                   
  no-color.ts      |     100 |      100 |     100 |     100 |                   
  ...c-resolver.ts |     100 |      100 |     100 |     100 |                   
  ...tic-tokens.ts |     100 |      100 |     100 |     100 |                   
  ...-of-purple.ts |     100 |      100 |     100 |     100 |                   
  theme-compat.ts  |     100 |       50 |     100 |     100 | 79                
  theme-manager.ts |   89.74 |    82.53 |     100 |   89.74 | ...04-310,315-316 
  theme.ts         |   99.51 |    76.84 |   85.71 |   99.51 | 269-270           
  xcode.ts         |     100 |      100 |     100 |     100 |                   
 src/ui/utils      |   43.03 |    86.63 |    61.6 |   43.03 |                   
  ...Colorizer.tsx |    5.76 |      100 |       0 |    5.76 | ...16-128,140-232 
  ...olePatcher.ts |      78 |    77.77 |     100 |      78 | 58-69             
  ...nRenderer.tsx |    9.15 |      100 |       0 |    9.15 | 26-170,179-188    
  ...wnDisplay.tsx |    5.63 |      100 |       0 |    5.63 | ...00-425,436-440 
  ...eRenderer.tsx |   10.63 |      100 |       0 |   10.63 | ...32-247,260-395 
  ...ketedPaste.ts |   55.55 |      100 |       0 |   55.55 | 11-12,15-16       
  clipboard.ts     |   97.29 |    84.61 |     100 |   97.29 | 40                
  ...boardUtils.ts |   32.25 |     37.5 |     100 |   32.25 | ...55-114,129-145 
  commandUtils.ts  |   93.44 |    89.79 |     100 |   93.44 | ...31,135,137-138 
  computeStats.ts  |     100 |      100 |     100 |     100 |                   
  displayUtils.ts  |     100 |      100 |     100 |     100 |                   
  formatters.ts    |   90.47 |    95.23 |     100 |   90.47 | 57-60             
  fuzzyFilter.ts   |     100 |    96.42 |     100 |     100 | 75                
  highlight.ts     |   65.43 |      100 |   66.66 |   65.43 | 77-110            
  input.ts         |   64.51 |    88.88 |   33.33 |   64.51 | 18-25,51-58       
  ...olDetector.ts |    6.36 |      100 |       0 |    6.36 | ...51-152,155-156 
  ...nUtilities.ts |   69.84 |    85.71 |     100 |   69.84 | 75-91,100-101     
  mouse.ts         |   83.69 |    71.42 |     100 |   83.69 | ...03,210,223-224 
  ...mConstants.ts |     100 |      100 |     100 |     100 |                   
  ...opDetector.ts |       0 |        0 |       0 |       0 | 1-209             
  responsive.ts    |    69.9 |    73.33 |      80 |    69.9 | ...95-103,106-121 
  ...putHandler.ts |   87.36 |    90.32 |     100 |   87.36 | 52-53,74-83       
  ...alContract.ts |     100 |      100 |     100 |     100 |                   
  terminalLinks.ts |     100 |      100 |     100 |     100 |                   
  ...lSequences.ts |     100 |      100 |     100 |     100 |                   
  terminalSetup.ts |    4.03 |      100 |       0 |    4.03 | 40-340            
  textUtils.ts     |   74.77 |    94.59 |   72.72 |   74.77 | ...14-115,135-137 
  ...Formatters.ts |   17.39 |      100 |       0 |   17.39 | 14-21,29-36,50-52 
  ...icsTracker.ts |     100 |    66.66 |     100 |     100 | 32-34             
  ui-sizing.ts     |   21.05 |      100 |       0 |   21.05 | 11-23,26-31       
  updateCheck.ts   |     100 |    93.75 |     100 |     100 | 33,44             
 src/utils         |   57.49 |    88.38 |   81.25 |   57.49 |                   
  ...ionContext.ts |   79.59 |       75 |     100 |   79.59 | 37-40,62-63,78-81 
  bootstrap.ts     |   94.11 |    88.88 |     100 |   94.11 | 71-72             
  checks.ts        |   33.33 |      100 |       0 |   33.33 | 23-28             
  cleanup.ts       |   72.72 |      100 |      75 |   72.72 | 43-52             
  commands.ts      |    50.9 |    63.63 |     100 |    50.9 | 25-26,45,57-84    
  commentJson.ts   |    92.3 |     92.5 |     100 |    92.3 | 94-102            
  ...ScopeUtils.ts |   19.23 |      100 |       0 |   19.23 | 23-40,46-73       
  ...icSettings.ts |   88.61 |    88.88 |     100 |   88.61 | ...37,40-43,61-64 
  ...arResolver.ts |   96.42 |    96.15 |     100 |   96.42 | 111-112           
  errors.ts        |   94.59 |       88 |     100 |   94.59 | 49-50,88-89       
  events.ts        |     100 |      100 |     100 |     100 |                   
  gitUtils.ts      |    92.5 |    82.35 |     100 |    92.5 | 61-62,77-80       
  ...AutoUpdate.ts |   67.05 |    77.08 |   71.42 |   67.05 | ...13-214,261-326 
  ...lationInfo.ts |   99.36 |    97.82 |     100 |   99.36 | 93                
  math.ts          |   66.66 |      100 |       0 |   66.66 | 15                
  readStdin.ts     |   79.24 |       90 |      80 |   79.24 | 31-38,50-52       
  relaunch.ts      |     100 |      100 |     100 |     100 |                   
  resolvePath.ts   |   66.66 |       25 |     100 |   66.66 | 12-13,16,18-19    
  sandbox.ts       |    5.46 |      100 |   18.18 |    5.46 | 31-42,103-1199    
  ...ionCleanup.ts |   94.58 |    87.69 |     100 |   94.58 | ...74-175,256-257 
  sessionUtils.ts  |    9.23 |      100 |       0 |    9.23 | 43-99,106-120     
  settingsUtils.ts |   84.14 |    90.52 |   93.33 |   84.14 | ...12-439,478-479 
  ...ttingSaver.ts |    1.92 |      100 |       0 |    1.92 | 7-28,36-81        
  spawnWrapper.ts  |     100 |      100 |     100 |     100 |                   
  ...upWarnings.ts |     100 |      100 |     100 |     100 |                   
  stdinSafety.ts   |   93.24 |    86.48 |     100 |   93.24 | ...62-163,167,242 
  ...entEmitter.ts |     100 |      100 |     100 |     100 |                   
  ...upWarnings.ts |     100 |      100 |     100 |     100 |                   
  version.ts       |     100 |       50 |     100 |     100 | 16                
  windowTitle.ts   |     100 |      100 |     100 |     100 |                   
 src/utils/privacy |    46.3 |    68.57 |   52.63 |    46.3 |                   
  ...taRedactor.ts |   60.66 |    70.58 |   55.55 |   60.66 | ...77-479,485-506 
  ...acyManager.ts |       0 |        0 |       0 |       0 | 1-178             
 ...ed-integration |   22.23 |        0 |       0 |   22.23 |                   
  acp.ts           |   14.63 |        0 |       0 |   14.63 | ...31-332,335-342 
  ...temService.ts |   20.58 |      100 |       0 |   20.58 | ...34,37-46,48-49 
  schema.ts        |     100 |      100 |     100 |     100 |                   
  ...ntegration.ts |    4.41 |      100 |       0 |    4.41 | ...1449,1464-1514 
-------------------|---------|----------|---------|---------|-------------------
Core Package - Full Text Report
-------------------|---------|----------|---------|---------|-------------------
File               | % Stmts | % Branch | % Funcs | % Lines | Uncovered Line #s 
-------------------|---------|----------|---------|---------|-------------------
All files          |   71.16 |    78.92 |   73.28 |   71.16 |                   
 src               |     100 |      100 |     100 |     100 |                   
  index.ts         |     100 |      100 |     100 |     100 |                   
 src/__mocks__/fs  |       0 |        0 |       0 |       0 |                   
  promises.ts      |       0 |        0 |       0 |       0 | 1-48              
 src/adapters      |     100 |      100 |     100 |     100 |                   
  ...eamAdapter.ts |     100 |      100 |     100 |     100 |                   
 src/agents        |    77.5 |     68.1 |      90 |    77.5 |                   
  ...vestigator.ts |       0 |        0 |       0 |       0 | 1-152             
  executor.ts      |   88.25 |    67.03 |     100 |   88.25 | ...04-705,741-747 
  invocation.ts    |   96.34 |    76.47 |     100 |   96.34 | 61,65-66          
  registry.ts      |       0 |        0 |       0 |       0 | 1-83              
  types.ts         |     100 |      100 |     100 |     100 |                   
  utils.ts         |   78.94 |       80 |     100 |   78.94 | 32-35             
 src/auth          |   66.07 |    80.16 |   76.76 |   66.07 |                   
  ...evice-flow.ts |    7.21 |      100 |       0 |    7.21 | ...49-268,274-282 
  ...evice-flow.ts |   46.65 |    57.14 |   63.63 |   46.65 | ...95-484,494-580 
  oauth-errors.ts  |   94.15 |    83.33 |     100 |   94.15 | ...68,609,635-636 
  precedence.ts    |   76.75 |    78.15 |   94.44 |   76.75 | ...1028,1034-1037 
  ...evice-flow.ts |    8.33 |        0 |       0 |    8.33 | ...69-206,214-220 
  token-store.ts   |      81 |    87.65 |   93.75 |      81 | ...76-477,488-496 
  types.ts         |     100 |      100 |     100 |     100 |                   
 src/code_assist   |   68.41 |    78.81 |      78 |   68.41 |                   
  codeAssist.ts    |   16.25 |       50 |   33.33 |   16.25 | ...1,80-87,95-108 
  converter.ts     |   94.96 |    93.02 |     100 |   94.96 | ...88,202,219-220 
  ...al-storage.ts |   98.21 |       75 |     100 |   98.21 | 70,119            
  oauth2.ts        |    63.4 |    75.29 |   78.57 |    63.4 | ...16-717,722-723 
  server.ts        |   51.89 |    72.72 |   53.84 |   51.89 | ...99-240,243-246 
  setup.ts         |   82.92 |    73.91 |     100 |   82.92 | ...27-129,153-159 
  types.ts         |     100 |      100 |     100 |     100 |                   
 src/commands      |     100 |      100 |     100 |     100 |                   
  extensions.ts    |     100 |      100 |     100 |     100 |                   
 src/config        |   74.44 |    78.81 |   59.11 |   74.44 |                   
  config.ts        |      73 |    79.28 |   47.46 |      73 | ...2003,2017-2018 
  constants.ts     |     100 |      100 |     100 |     100 |                   
  endpoints.ts     |     100 |      100 |     100 |     100 |                   
  index.ts         |     100 |      100 |     100 |     100 |                   
  models.ts        |     100 |      100 |     100 |     100 |                   
  ...ileManager.ts |   83.61 |    77.02 |     100 |   83.61 | ...09-413,415-419 
  ...rSingleton.ts |   77.56 |    82.85 |   41.66 |   77.56 | ...45,248-251,259 
  storage.ts       |   90.65 |     86.2 |   92.59 |   90.65 | ...67,69,71,96-97 
  ...entManager.ts |   57.91 |    65.57 |     100 |   57.91 | ...57-458,476-500 
  types.ts         |       0 |        0 |       0 |       0 |                   
 ...nfirmation-bus |   68.39 |    88.46 |   66.66 |   68.39 |                   
  index.ts         |       0 |        0 |       0 |       0 | 1-2               
  message-bus.ts   |   67.48 |    91.66 |   72.72 |   67.48 | ...04-238,247-255 
  types.ts         |     100 |      100 |     100 |     100 |                   
 src/core          |   65.74 |    73.58 |   69.82 |   65.74 |                   
  baseLlmClient.ts |   97.26 |       90 |     100 |   97.26 | 55-56,244-245     
  ...ntegration.ts |    96.5 |    95.65 |     100 |    96.5 | ...28-129,209-210 
  client.ts        |    57.2 |    73.55 |   66.66 |    57.2 | ...2136,2141-2152 
  ...ion-config.ts |     100 |      100 |     100 |     100 |                   
  ...tGenerator.ts |   91.08 |    80.76 |     100 |   91.08 | ...32,148,163-166 
  ...lScheduler.ts |   76.92 |    75.48 |   90.24 |   76.92 | ...1872,1876-1882 
  geminiChat.ts    |   59.45 |    67.09 |      60 |   59.45 | ...3030,3053-3054 
  geminiRequest.ts |     100 |      100 |     100 |     100 |                   
  ...nAIWrapper.ts |   88.88 |      100 |   83.33 |   88.88 | 56-59             
  logger.ts        |   81.26 |    81.81 |     100 |   81.26 | ...64-378,419-430 
  ...tGenerator.ts |   10.89 |      100 |       0 |   10.89 | ...93-194,197-200 
  ...olExecutor.ts |   66.66 |    80.76 |   55.55 |   66.66 | ...52-153,198-232 
  prompts.ts       |   67.98 |    65.62 |      70 |   67.98 | ...95,311,352-355 
  subagent.ts      |   52.01 |    65.42 |   57.14 |   52.01 | ...1969,1981-1982 
  ...chestrator.ts |    89.1 |    73.56 |   95.23 |    89.1 | ...17,620-621,626 
  ...tScheduler.ts |       0 |        0 |       0 |       0 | 1                 
  tokenLimits.ts   |   90.27 |    73.07 |     100 |   90.27 | ...72,77,79,83,93 
  ...Governance.ts |    94.2 |     90.9 |     100 |    94.2 | 34-35,51-52       
  turn.ts          |    92.2 |    73.68 |     100 |    92.2 | ...29-430,460-461 
 src/debug         |   78.28 |    87.83 |   89.65 |   78.28 |                   
  ...ionManager.ts |   77.88 |    78.04 |      85 |   77.88 | ...33-234,251-255 
  DebugLogger.ts   |   91.07 |    90.32 |      88 |   91.07 | ...72,211-215,252 
  FileOutput.ts    |   91.79 |    93.02 |     100 |   91.79 | ...,93-97,117-118 
  ...ionManager.ts |       0 |      100 |     100 |       0 | 18-64             
  ...FileOutput.ts |       0 |      100 |     100 |       0 | 15-37             
  index.ts         |     100 |      100 |     100 |     100 |                   
  types.ts         |       0 |        0 |       0 |       0 |                   
 src/filters       |   99.19 |    98.79 |     100 |   99.19 |                   
  EmojiFilter.ts   |   99.19 |    98.79 |     100 |   99.19 | 208-209           
 src/hooks         |   73.36 |    78.98 |   66.66 |   73.36 |                   
  hookPlanner.ts   |   98.82 |     93.1 |     100 |   98.82 | 97                
  hookRegistry.ts  |    90.1 |    82.14 |     100 |    90.1 | ...66,268,270,272 
  ...Translator.ts |   94.62 |    67.44 |     100 |   94.62 | ...87-288,299,348 
  index.ts         |     100 |      100 |     100 |     100 |                   
  ...ssion-hook.ts |   88.88 |    33.33 |     100 |   88.88 | 24,30             
  types.ts         |   27.61 |    85.71 |       0 |   27.61 | ...11-328,339-368 
 src/ide           |   72.74 |    84.61 |   72.54 |   72.74 |                   
  constants.ts     |     100 |      100 |     100 |     100 |                   
  detect-ide.ts    |     100 |      100 |     100 |     100 |                   
  ide-client.ts    |   54.41 |    75.51 |   54.83 |   54.41 | ...70-478,506-514 
  ide-installer.ts |   90.55 |    85.18 |     100 |   90.55 | ...35,142-146,159 
  ideContext.ts    |    83.8 |      100 |     100 |    83.8 | 75-91             
  process-utils.ts |   89.13 |    82.14 |     100 |   89.13 | ...69-170,211-212 
 src/interfaces    |       0 |        0 |       0 |       0 |                   
  index.ts         |       0 |        0 |       0 |       0 |                   
  ....interface.ts |       0 |        0 |       0 |       0 |                   
 src/mcp           |   79.34 |    78.18 |   71.95 |   79.34 |                   
  ...oken-store.ts |   87.38 |    90.47 |   81.25 |   87.38 | ...33-334,337-338 
  ...h-provider.ts |   87.14 |      100 |      25 |   87.14 | ...96,100,104-105 
  ...h-provider.ts |   75.07 |    55.88 |     100 |   75.07 | ...21-928,935-937 
  ...en-storage.ts |    81.5 |    88.88 |   68.18 |    81.5 | ...95-196,201-202 
  oauth-utils.ts   |   72.19 |    85.29 |   91.66 |   72.19 | ...64-285,310-333 
  ...n-provider.ts |      88 |    94.73 |   33.33 |      88 | ...38,142,146-147 
  token-store.ts   |     100 |      100 |     100 |     100 |                   
 .../token-storage |   87.57 |    87.94 |   93.02 |   87.57 |                   
  ...en-storage.ts |     100 |      100 |     100 |     100 |                   
  ...en-storage.ts |   86.61 |    87.09 |   92.85 |   86.61 | ...64-172,180-181 
  ...en-storage.ts |     100 |      100 |     100 |     100 |                   
  ...en-storage.ts |   82.55 |    83.11 |   84.61 |   82.55 | ...60,262,314-315 
  types.ts         |     100 |      100 |     100 |     100 |                   
 src/models        |   83.44 |    91.15 |    87.5 |   83.44 |                   
  hydration.ts     |    4.76 |      100 |       0 |    4.76 | 65-129,151-231    
  index.ts         |     100 |      100 |     100 |     100 |                   
  profiles.ts      |     100 |      100 |     100 |     100 |                   
  ...ntegration.ts |   95.31 |    85.36 |     100 |   95.31 | ...35-136,199-200 
  registry.ts      |   90.45 |    88.88 |      92 |   90.45 | ...69-270,389-402 
  schema.ts        |     100 |      100 |     100 |     100 |                   
  transformer.ts   |     100 |      100 |     100 |     100 |                   
 src/parsers       |    70.7 |       75 |    86.2 |    70.7 |                   
  ...CallParser.ts |    70.7 |       75 |    86.2 |    70.7 | ...1,983,989-1004 
 src/policy        |   81.37 |    79.81 |   82.14 |   81.37 |                   
  config.ts        |   64.06 |    62.79 |   71.42 |   64.06 | ...10-417,427-445 
  index.ts         |     100 |      100 |     100 |     100 |                   
  policy-engine.ts |   96.42 |    97.61 |   88.88 |   96.42 | 171-174           
  ...-stringify.ts |   82.55 |     64.1 |      50 |   82.55 | ...22-126,139-140 
  toml-loader.ts   |   90.22 |     86.2 |     100 |   90.22 | ...60,462,469,471 
  types.ts         |     100 |      100 |     100 |     100 |                   
 src/prompt-config |   74.43 |    83.81 |   85.05 |   74.43 |                   
  ...lateEngine.ts |   91.75 |    85.91 |     100 |   91.75 | ...48-249,264-267 
  index.ts         |       0 |      100 |     100 |       0 | 5-41              
  prompt-cache.ts  |   99.06 |     97.4 |     100 |   99.06 | 211-212           
  ...-installer.ts |   83.11 |    82.47 |     100 |   83.11 | ...1173,1253-1254 
  prompt-loader.ts |   87.27 |    90.42 |   76.92 |   87.27 | ...22-423,429-430 
  ...t-resolver.ts |   34.85 |    64.17 |   53.84 |   34.85 | ...20-771,774-802 
  ...pt-service.ts |   82.35 |    83.33 |   78.94 |   82.35 | ...37,568,580-581 
  ...delegation.ts |   58.33 |       50 |     100 |   58.33 | 24-34             
  types.ts         |     100 |      100 |     100 |     100 |                   
 ...onfig/defaults |   50.17 |    47.09 |     100 |   50.17 |                   
  core-defaults.ts |   37.54 |    39.02 |     100 |   37.54 | ...72,283,289-297 
  index.ts         |     100 |      100 |     100 |     100 |                   
  ...est-loader.ts |   81.81 |       80 |     100 |   81.81 | ...02-108,116-120 
  ...t-warnings.ts |      92 |    33.33 |     100 |      92 | 17-18             
  ...r-defaults.ts |    41.7 |    39.02 |     100 |    41.7 | ...40,251,257-262 
  ...e-defaults.ts |     100 |      100 |     100 |     100 |                   
  tool-defaults.ts |      50 |       40 |     100 |      50 | ...11-216,229-234 
 src/prompts       |   26.41 |      100 |      25 |   26.41 |                   
  mcp-prompts.ts   |   18.18 |      100 |       0 |   18.18 | 11-19             
  ...t-registry.ts |   28.57 |      100 |   28.57 |   28.57 | ...42,48-55,68-73 
 src/providers     |   67.86 |    77.75 |   67.21 |   67.86 |                   
  BaseProvider.ts  |   80.65 |    78.94 |   80.76 |   80.65 | ...1195,1198-1199 
  ...eratorRole.ts |     100 |      100 |     100 |     100 |                   
  IModel.ts        |       0 |        0 |       0 |       0 |                   
  IProvider.ts     |       0 |        0 |       0 |       0 |                   
  ...derManager.ts |     100 |      100 |     100 |     100 |                   
  ITool.ts         |       0 |        0 |       0 |       0 |                   
  ...ngProvider.ts |   87.91 |     89.6 |   90.62 |   87.91 | ...1106,1137-1139 
  ...derWrapper.ts |   56.67 |    63.86 |   51.28 |   56.67 | ...1356,1363-1370 
  ...tGenerator.ts |    17.3 |      100 |       0 |    17.3 | ...59,62-79,82-85 
  ...derManager.ts |   57.48 |    73.41 |   60.46 |   57.48 | ...1523-1524,1527 
  errors.ts        |   78.57 |    77.77 |      60 |   78.57 | ...43,150-170,191 
  ...ConfigKeys.ts |     100 |      100 |     100 |     100 |                   
  types.ts         |       0 |        0 |       0 |       0 | 1                 
 ...ders/anthropic |   78.39 |    80.04 |   74.57 |   78.39 |                   
  ...icProvider.ts |   80.57 |    82.83 |      75 |   80.57 | ...2638,2646-2647 
  ...aConverter.ts |   52.53 |    44.11 |   71.42 |   52.53 | ...57,263,280-288 
 ...pic/test-utils |       0 |        0 |       0 |       0 |                   
  ...cTestUtils.ts |       0 |        0 |       0 |       0 |                   
 ...oviders/gemini |   59.34 |     66.8 |   51.21 |   59.34 |                   
  ...niProvider.ts |    55.1 |    55.49 |   48.71 |    55.1 | ...1879,1888-1889 
  ...Signatures.ts |     100 |    98.38 |     100 |     100 | 182               
 ...viders/logging |   39.53 |       80 |      75 |   39.53 |                   
  ...tExtractor.ts |       0 |        0 |       0 |       0 | 1-228             
  ...nceTracker.ts |   89.47 |    84.21 |   81.81 |   89.47 | ...66-167,182-183 
 ...oviders/openai |   52.46 |    74.17 |   60.73 |   52.46 |                   
  ...ationCache.ts |   70.49 |    86.66 |   82.35 |   70.49 | ...64-166,216-217 
  ...rateParams.ts |       0 |        0 |       0 |       0 |                   
  ...AIProvider.ts |   38.55 |    65.78 |   45.55 |   38.55 | ...4990,4998-5007 
  ...API_MODELS.ts |     100 |      100 |     100 |     100 |                   
  ...lCollector.ts |   93.33 |    89.28 |     100 |   93.33 | ...51-153,173-174 
  ...Normalizer.ts |   92.75 |       96 |     100 |   92.75 | 74-78             
  ...llPipeline.ts |   64.54 |    53.33 |      75 |   64.54 | ...34-143,174-184 
  ...eValidator.ts |   94.02 |    93.75 |     100 |   94.02 | 106-109           
  ...sesRequest.ts |   83.85 |    93.24 |     100 |   83.85 | ...64,297,302-307 
  ...moteTokens.ts |   89.55 |     92.3 |     100 |   89.55 | 101-107           
  ...oviderInfo.ts |    86.2 |    73.52 |     100 |    86.2 | ...31-133,144-145 
  ...uestParams.ts |   87.27 |    57.69 |     100 |   87.27 | ...20-121,123-124 
  ...nsesStream.ts |   86.31 |     82.1 |     100 |   86.31 | ...75,498-505,529 
  ...aConverter.ts |    24.2 |    42.85 |   28.57 |    24.2 | ...59-260,277-285 
  ...lResponses.ts |   71.98 |    73.33 |      75 |   71.98 | ...97-301,321-335 
  test-types.ts    |       0 |        0 |       0 |       0 |                   
  toolNameUtils.ts |   96.79 |    95.45 |      50 |   96.79 | 102,127,239-241   
 ...enai-responses |   68.92 |     78.3 |   45.16 |   68.92 |                   
  CODEX_MODELS.ts  |     100 |      100 |     100 |     100 |                   
  ...esProvider.ts |   79.97 |    81.81 |   54.54 |   79.97 | ...1045-1050,1074 
  ...romContent.ts |   84.93 |    66.66 |     100 |   84.93 | 45-49,71-75,94    
  index.ts         |       0 |        0 |       0 |       0 | 1                 
  ...aConverter.ts |    8.12 |       20 |   14.28 |    8.12 | ...53-277,280-289 
 .../openai-vercel |   69.18 |     67.2 |   66.66 |   69.18 |                   
  ...elProvider.ts |   67.29 |    64.08 |   54.34 |   67.29 | ...1925,1932,1938 
  errors.ts        |   93.23 |    82.05 |     100 |   93.23 | ...50-151,165-169 
  index.ts         |     100 |      100 |     100 |     100 |                   
  ...Conversion.ts |   71.63 |    73.17 |   83.33 |   71.63 | ...45,548-549,553 
  ...aConverter.ts |   50.95 |       40 |   71.42 |   50.95 | ...58-259,276-284 
  toolIdUtils.ts   |     100 |      100 |     100 |     100 |                   
 ...ders/reasoning |    42.1 |       90 |      70 |    42.1 |                   
  ...oningUtils.ts |    42.1 |       90 |      70 |    42.1 | ...45-203,235-310 
 ...ers/test-utils |     100 |      100 |     100 |     100 |                   
  ...TestConfig.ts |     100 |      100 |     100 |     100 |                   
 ...ers/tokenizers |   69.49 |    77.77 |      75 |   69.49 |                   
  ...cTokenizer.ts |   68.42 |       75 |     100 |   68.42 | 34-39             
  ITokenizer.ts    |       0 |        0 |       0 |       0 |                   
  ...ITokenizer.ts |      70 |       80 |   66.66 |      70 | 52-55,62-71       
 ...roviders/types |       0 |        0 |       0 |       0 |                   
  ...iderConfig.ts |       0 |        0 |       0 |       0 |                   
  ...derRuntime.ts |       0 |        0 |       0 |       0 |                   
 ...roviders/utils |   87.94 |     87.5 |   95.83 |   87.94 |                   
  authToken.ts     |   33.33 |       50 |      50 |   33.33 | 14-22,30-35       
  ...sExtractor.ts |   95.45 |     91.3 |     100 |   95.45 | 15-16             
  dumpContext.ts   |    96.1 |    95.65 |     100 |    96.1 | 110-112           
  ...SDKContext.ts |   94.59 |       75 |     100 |   94.59 | 27,49             
  localEndpoint.ts |   89.28 |    91.42 |     100 |   89.28 | ...18-119,138-139 
  ...malization.ts |     100 |      100 |     100 |     100 |                   
  ...nsePayload.ts |   92.63 |    84.74 |     100 |   92.63 | ...42-147,200-204 
  userMemory.ts    |   51.51 |       60 |     100 |   51.51 | 16-18,31-43       
 src/runtime       |   84.02 |    86.24 |   77.33 |   84.02 |                   
  ...imeContext.ts |     100 |      100 |     100 |     100 |                   
  ...timeLoader.ts |      85 |       70 |      80 |      85 | ...87-190,228-231 
  ...ntimeState.ts |   95.22 |    92.07 |     100 |   95.22 | ...35-636,652-653 
  ...ionContext.ts |   85.89 |    94.11 |   85.71 |   85.89 | 80-82,149-156     
  ...imeContext.ts |   82.03 |      100 |   64.28 |   82.03 | ...27-130,132-137 
  index.ts         |       0 |        0 |       0 |       0 | 1-15              
  ...imeContext.ts |    64.7 |    83.33 |     100 |    64.7 | 67-78,83-94       
  ...meAdapters.ts |   54.95 |    70.58 |      50 |   54.95 | ...98-108,125-152 
  ...ateFactory.ts |    96.9 |    86.48 |     100 |    96.9 | 95,110,136        
 src/services      |   80.01 |    84.25 |   75.67 |   80.01 |                   
  ...ardService.ts |   93.33 |    92.85 |     100 |   93.33 | 63,67-68          
  ...y-analyzer.ts |   76.32 |    81.17 |   77.77 |   76.32 | ...79-507,513-514 
  ...eryService.ts |   97.03 |     90.9 |     100 |   97.03 | 47,56,140-141     
  ...temService.ts |    61.9 |      100 |   66.66 |    61.9 | 54-61             
  ...ts-service.ts |      50 |      100 |       0 |      50 | 41-42,48-49       
  gitService.ts    |   70.58 |    93.33 |      60 |   70.58 | ...16-126,129-133 
  index.ts         |       0 |        0 |       0 |       0 | 1-15              
  ...ionService.ts |   99.04 |    98.41 |     100 |   99.04 | 270-271           
  ...ionService.ts |   82.51 |    81.66 |   84.21 |   82.51 | ...29-754,763-784 
  ...xt-tracker.ts |   94.87 |       90 |    87.5 |   94.87 | 54-55             
  ...er-service.ts |   42.42 |     90.9 |      25 |   42.42 | ...36-139,142-160 
  ...er-service.ts |   69.45 |    55.88 |      80 |   69.45 | ...85-289,311-314 
 ...rvices/history |   82.32 |    83.66 |    88.4 |   82.32 |                   
  ...Converters.ts |   87.19 |    82.64 |   85.71 |   87.19 | ...44,351,357-363 
  HistoryEvents.ts |       0 |        0 |       0 |       0 |                   
  ...oryService.ts |   79.39 |    83.27 |   87.03 |   79.39 | ...1402,1438-1439 
  IContent.ts      |   88.05 |    80.76 |     100 |   88.05 | ...33-234,251-254 
  ...calToolIds.ts |   96.82 |    93.33 |     100 |   96.82 | 36-37             
 src/settings      |   82.05 |    84.47 |   60.71 |   82.05 |                   
  ...ngsService.ts |   91.69 |       75 |   95.23 |   91.69 | ...53-354,384-388 
  index.ts         |     100 |      100 |     100 |     100 |                   
  ...gsRegistry.ts |   78.99 |     90.8 |   35.48 |   78.99 | ...1144,1147-1164 
  ...ceInstance.ts |     100 |      100 |     100 |     100 |                   
  types.ts         |       0 |        0 |       0 |       0 | 1                 
 src/storage       |   93.53 |    93.02 |   94.44 |   93.53 |                   
  ...FileWriter.ts |   83.54 |       80 |    87.5 |   83.54 | 40-41,71-81       
  ...nceService.ts |   98.67 |    96.96 |     100 |   98.67 | 293-294           
  sessionTypes.ts  |     100 |      100 |     100 |     100 |                   
 src/telemetry     |   65.57 |    79.79 |   60.33 |   65.57 |                   
  constants.ts     |     100 |      100 |     100 |     100 |                   
  ...-exporters.ts |   28.08 |      100 |       0 |   28.08 | ...14-115,118-119 
  index.ts         |     100 |      100 |     100 |     100 |                   
  ...t.circular.ts |       0 |        0 |       0 |       0 | 1-17              
  ...t.circular.ts |       0 |        0 |       0 |       0 | 1-132             
  loggers.ts       |   63.35 |    69.76 |   59.25 |   63.35 | ...71-584,592-608 
  metrics.ts       |   62.35 |    96.15 |   66.66 |   62.35 | ...41-163,166-189 
  sdk.ts           |   72.54 |    23.07 |     100 |   72.54 | ...35,140-141,143 
  ...l-decision.ts |   33.33 |      100 |       0 |   33.33 | 17-32             
  types.ts         |   73.94 |    84.88 |   64.91 |   73.94 | ...34-636,639-643 
  uiTelemetry.ts   |   95.26 |    96.15 |   91.66 |   95.26 | 152,189-195       
 src/test-utils    |      87 |    84.29 |   57.47 |      87 |                   
  config.ts        |     100 |      100 |     100 |     100 |                   
  index.ts         |       0 |        0 |       0 |       0 | 1-9               
  mock-tool.ts     |   96.25 |    93.33 |   81.81 |   96.25 | 62-63,118         
  ...aceContext.ts |     100 |      100 |     100 |     100 |                   
  ...allOptions.ts |   93.92 |    91.48 |   63.63 |   93.92 | ...19,187,216-219 
  runtime.ts       |   80.18 |       75 |   39.53 |   80.18 | ...99-301,310-312 
  tools.ts         |      82 |    76.92 |   78.94 |      82 | ...31,153,157-158 
 src/todo          |   51.55 |    83.33 |      75 |   51.55 |                   
  todoFormatter.ts |   51.55 |    83.33 |      75 |   51.55 | ...56-160,198-199 
 src/tools         |   72.53 |    76.34 |   76.69 |   72.53 |                   
  ...lFormatter.ts |     100 |      100 |     100 |     100 |                   
  ToolFormatter.ts |   20.89 |    76.19 |   33.33 |   20.89 | ...07,514-612,627 
  ...IdStrategy.ts |   91.94 |    88.88 |     100 |   91.94 | ...55-258,267-270 
  ast-edit.ts      |   46.57 |    56.52 |   56.96 |   46.57 | ...2376,2379-2498 
  codesearch.ts    |      98 |     87.5 |   85.71 |      98 | 110-111,173       
  ...line_range.ts |   84.68 |    67.56 |      70 |   84.68 | ...81-282,290-291 
  diffOptions.ts   |     100 |      100 |     100 |     100 |                   
  ...-web-fetch.ts |   93.18 |    72.41 |   77.77 |   93.18 | ...56,166-167,187 
  ...scapeUtils.ts |   61.65 |    72.97 |      50 |   61.65 | ...93,309,311-321 
  edit.ts          |   75.16 |    77.08 |   77.77 |   75.16 | ...94-795,813-855 
  ...web-search.ts |   97.91 |    85.71 |   83.33 |   97.91 | 126-127,191       
  ...y-replacer.ts |   85.71 |    84.35 |     100 |   85.71 | ...47-448,493-494 
  glob.ts          |   90.65 |    81.96 |      90 |   90.65 | ...62-263,368-369 
  ...-web-fetch.ts |   92.87 |    88.23 |    92.3 |   92.87 | ...82-383,393-394 
  ...invocation.ts |   54.74 |    38.88 |      75 |   54.74 | ...29-133,165-210 
  ...web-search.ts |     100 |      100 |     100 |     100 |                   
  grep.ts          |      60 |    78.19 |   73.68 |      60 | ...77-981,991-992 
  ...rt_at_line.ts |   81.55 |    76.08 |      70 |   81.55 | ...05-306,314-315 
  ...-subagents.ts |   87.28 |    69.56 |   88.88 |   87.28 | ...1,81-89,98,153 
  ls.ts            |    97.5 |    89.23 |     100 |    97.5 | 154-159           
  ...nt-manager.ts |   48.51 |    41.17 |   35.71 |   48.51 | ...13-314,320-325 
  mcp-client.ts    |   56.26 |    61.24 |   59.37 |   56.26 | ...1350,1354-1357 
  mcp-tool.ts      |   94.35 |    93.75 |   86.95 |   94.35 | ...50-260,322-323 
  memoryTool.ts    |   79.39 |    82.75 |    87.5 |   79.39 | ...55-356,399-440 
  ...iable-tool.ts |   98.34 |       80 |     100 |   98.34 | 167-168           
  read-file.ts     |   91.51 |    80.26 |    90.9 |   91.51 | ...34-235,403-404 
  ...many-files.ts |      73 |    77.92 |   88.88 |      73 | ...31-532,539-540 
  ...line_range.ts |    74.9 |     65.9 |      80 |    74.9 | ...50-351,355-356 
  ripGrep.ts       |   89.75 |    86.02 |    92.3 |   89.75 | ...47-448,469-470 
  shell.ts         |   84.96 |    80.22 |      90 |   84.96 | ...20-822,835-836 
  task.ts          |   80.65 |    69.04 |   92.85 |   80.65 | ...89,792,795-804 
  todo-events.ts   |    62.5 |      100 |       0 |    62.5 | 23-24,27-28,31-32 
  todo-pause.ts    |   87.09 |       80 |     100 |   87.09 | 64-69,73-78,93-98 
  todo-read.ts     |   89.24 |    94.73 |     100 |   89.24 | 113-124           
  todo-schemas.ts  |     100 |      100 |     100 |     100 |                   
  todo-store.ts    |   86.66 |       80 |     100 |   86.66 | 48-49,55-56,63-64 
  todo-write.ts    |   87.38 |    72.41 |    87.5 |   87.38 | ...75,210-212,271 
  ...tion-types.ts |     100 |      100 |     100 |     100 |                   
  tool-context.ts  |     100 |      100 |     100 |     100 |                   
  tool-error.ts    |      88 |      100 |       0 |      88 | 106-113           
  tool-names.ts    |     100 |      100 |     100 |     100 |                   
  tool-registry.ts |   82.39 |    72.89 |   81.08 |   82.39 | ...05-613,621-622 
  toolNameUtils.ts |      80 |     92.1 |     100 |      80 | 59-60,64-65,69-82 
  tools.ts         |    81.6 |    83.09 |   72.22 |    81.6 | ...29-830,833-837 
  write-file.ts    |   76.51 |    68.88 |   73.33 |   76.51 | ...93-594,616-655 
 src/types         |     100 |      100 |     100 |     100 |                   
  modelParams.ts   |     100 |      100 |     100 |     100 |                   
 src/utils         |   79.47 |    85.82 |   84.35 |   79.47 |                   
  LruCache.ts      |       0 |        0 |       0 |       0 | 1-41              
  bfsFileSearch.ts |   88.88 |       90 |     100 |   88.88 | 83-91             
  browser.ts       |    8.69 |      100 |       0 |    8.69 | 17-53             
  channel.ts       |     100 |      100 |     100 |     100 |                   
  delay.ts         |     100 |      100 |     100 |     100 |                   
  editor.ts        |   97.64 |    94.23 |     100 |   97.64 | 159,228,231-232   
  ...entContext.ts |     100 |      100 |     100 |     100 |                   
  errorParsing.ts  |      88 |    78.26 |     100 |      88 | ...07,249,252,258 
  ...rReporting.ts |   83.72 |    84.61 |     100 |   83.72 | 82-86,107-115     
  errors.ts        |   55.55 |    71.42 |   38.46 |   55.55 | ...92-108,112-118 
  events.ts        |     100 |      100 |     100 |     100 |                   
  ...sionLoader.ts |   94.23 |       68 |     100 |   94.23 | ...,60-61,111-112 
  fetch.ts         |    31.5 |    66.66 |      25 |    31.5 | ...37,40-84,87-88 
  fileUtils.ts     |   95.41 |    90.29 |     100 |   95.41 | ...35-239,451-457 
  formatters.ts    |   54.54 |       50 |     100 |   54.54 | 12-16             
  ...eUtilities.ts |   96.11 |       96 |     100 |   96.11 | 36-37,67-68       
  ...rStructure.ts |   95.96 |    94.93 |     100 |   95.96 | ...14-117,345-347 
  getPty.ts        |    12.5 |      100 |       0 |    12.5 | 21-34             
  ...noreParser.ts |   94.01 |    89.74 |     100 |   94.01 | ...06-307,319-320 
  ...ineChanges.ts |   58.56 |    79.41 |      80 |   58.56 | ...18-256,264-270 
  gitUtils.ts      |   90.24 |    90.47 |     100 |   90.24 | 40-41,71-72       
  googleErrors.ts  |    1.47 |      100 |       0 |    1.47 | 132-317           
  ...uotaErrors.ts |   98.08 |    79.54 |     100 |   98.08 | 61-62,204         
  ide-trust.ts     |      60 |      100 |       0 |      60 | 14-15             
  ...rePatterns.ts |     100 |    96.55 |     100 |     100 | 248               
  ...ionManager.ts |     100 |       90 |     100 |     100 | 23                
  ...edit-fixer.ts |       0 |        0 |       0 |       0 | 1-156             
  ...yDiscovery.ts |   85.77 |     78.5 |   84.61 |   85.77 | ...02-603,606-607 
  ...tProcessor.ts |    93.4 |    86.51 |    92.3 |    93.4 | ...87-388,397-398 
  ...Inspectors.ts |   61.53 |      100 |      50 |   61.53 | 18-23             
  output-format.ts |   36.36 |      100 |       0 |   36.36 | ...52-153,163-184 
  package.ts       |   18.18 |      100 |       0 |   18.18 | 18-28             
  ...erCoercion.ts |   83.78 |    81.15 |     100 |   83.78 | ...79-180,242-243 
  partUtils.ts     |     100 |      100 |     100 |     100 |                   
  pathReader.ts    |       0 |        0 |       0 |       0 | 1-60              
  paths.ts         |   86.99 |    86.58 |     100 |   86.99 | ...22-223,237-238 
  ...rDetection.ts |   57.62 |    63.15 |     100 |   57.62 | ...9,92-93,99-100 
  retry.ts         |   68.36 |    78.57 |   81.81 |   68.36 | ...17-620,625-626 
  ...thResolver.ts |   84.87 |    83.87 |     100 |   84.87 | ...06,129,178-181 
  ...nStringify.ts |     100 |      100 |     100 |     100 |                   
  sanitization.ts  |     100 |      100 |     100 |     100 |                   
  ...aValidator.ts |   83.52 |    82.75 |     100 |   83.52 | 70-81,125-126     
  ...r-launcher.ts |   78.57 |     87.5 |   66.66 |   78.57 | ...33,135,153-188 
  session.ts       |     100 |      100 |     100 |     100 |                   
  shell-markers.ts |     100 |      100 |     100 |     100 |                   
  shell-parser.ts  |   25.57 |    58.33 |   46.15 |   25.57 | ...23-337,344-374 
  shell-utils.ts   |   86.19 |     87.5 |     100 |   86.19 | ...28-430,433-438 
  summarizer.ts    |     100 |    88.88 |     100 |     100 | 92                
  ...emEncoding.ts |   97.14 |    91.42 |     100 |   97.14 | 108-109,161       
  testUtils.ts     |   53.33 |      100 |   33.33 |   53.33 | ...53,59-64,70-72 
  textUtils.ts     |    12.5 |      100 |       0 |    12.5 | 15-34             
  thoughtUtils.ts  |     100 |      100 |     100 |     100 |                   
  tool-utils.ts    |   82.47 |     77.5 |     100 |   82.47 | ...25-126,136-137 
  ...putLimiter.ts |   88.07 |    79.06 |     100 |   88.07 | ...22-227,271-278 
  unicodeUtils.ts  |     100 |      100 |     100 |     100 |                   
  ...untManager.ts |   91.96 |    88.23 |     100 |   91.96 | 37-39,76-78,94-96 
  ...aceContext.ts |   96.82 |    95.34 |    92.3 |   96.82 | 94-95,109-110     
 ...ils/filesearch |   96.18 |    91.26 |     100 |   96.18 |                   
  crawlCache.ts    |     100 |      100 |     100 |     100 |                   
  crawler.ts       |   96.22 |     92.3 |     100 |   96.22 | 66-67             
  fileSearch.ts    |   93.22 |    86.95 |     100 |   93.22 | ...26-227,229-230 
  ignore.ts        |     100 |      100 |     100 |     100 |                   
  result-cache.ts  |     100 |    91.66 |     100 |     100 | 46                
-------------------|---------|----------|---------|---------|-------------------

For detailed HTML reports, please see the 'coverage-reports-24.x-ubuntu-latest' artifact from the main CI run.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@packages/core/src/providers/openai/parseResponsesStream.ts`:
- Around line 96-120: The code merges both response.reasoning_text.delta and
response.reasoning_summary_text.delta into a single buffer (reasoningText)
causing raw reasoning and summarized reasoning to be combined; update the
handler in parseResponsesStream (the switch cases for
'response.reasoning_text.delta', 'response.reasoning_summary_text.delta',
'response.reasoning_text.done', and 'response.reasoning_summary_text.done') to
maintain two separate accumulators (e.g., reasoningText and
reasoningSummaryText) and on their respective *.done events yield distinct
thinking blocks using the appropriate buffer (or event.text fallback) before
clearing each buffer.
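The two-accumulator fix described above can be sketched in isolation. This is a hypothetical, simplified model — event shapes, the `ThinkingBlock` type, and the `sourceField` discriminator are assumptions mirroring the review's description, not the actual parser code:

```typescript
// Simplified sketch of the two-accumulator pattern: raw reasoning deltas and
// summary deltas are buffered separately and flushed on their respective
// *.done events, so the two streams are never merged into one block.
type ReasoningEvent =
  | { type: 'response.reasoning_text.delta'; delta: string }
  | { type: 'response.reasoning_text.done'; text?: string }
  | { type: 'response.reasoning_summary_text.delta'; delta: string }
  | { type: 'response.reasoning_summary_text.done'; text?: string };

interface ThinkingBlock {
  type: 'thinking';
  thought: string;
  sourceField: 'reasoning_text' | 'reasoning_summary_text';
}

function* handleReasoningEvents(
  events: Iterable<ReasoningEvent>,
): Generator<ThinkingBlock> {
  let reasoningText = '';
  let reasoningSummaryText = '';
  for (const event of events) {
    switch (event.type) {
      case 'response.reasoning_text.delta':
        reasoningText += event.delta;
        break;
      case 'response.reasoning_summary_text.delta':
        reasoningSummaryText += event.delta;
        break;
      case 'response.reasoning_text.done': {
        // Prefer the API's final complete text; fall back to the accumulator.
        const text = event.text ?? reasoningText;
        if (text.trim()) {
          yield { type: 'thinking', thought: text, sourceField: 'reasoning_text' };
        }
        reasoningText = '';
        break;
      }
      case 'response.reasoning_summary_text.done': {
        const text = event.text ?? reasoningSummaryText;
        if (text.trim()) {
          yield { type: 'thinking', thought: text, sourceField: 'reasoning_summary_text' };
        }
        reasoningSummaryText = '';
        break;
      }
    }
  }
}
```

With interleaved raw and summary deltas, this yields two distinct blocks, each tagged with the buffer it came from.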
🧹 Nitpick comments (1)
packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts (1)

20-219: Good test coverage overall.

The test suite comprehensively covers the acceptance criteria from issue #922:

  • Reasoning-only streams ✓
  • Interleaved reasoning + text + tool calls ✓
  • Empty/whitespace reasoning chunks ✓
  • Reasoning emitted before usage metadata ✓

Consider adding a test for response.done (Codex variant) to explicitly verify parity with response.completed, though they share the same code path.
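The suggested parity test hinges on one assumption the review states: `response.done` (the Codex variant) and `response.completed` go through the same terminal code path. A minimal sketch of that shared-terminal check (names hypothetical, not the parser's actual identifiers):

```typescript
// Hypothetical sketch: both terminal event names signal end-of-stream, so a
// parity test only needs to drive the same fixture with the other name.
const TERMINAL_EVENT_TYPES = new Set(['response.completed', 'response.done']);

function isTerminalEvent(eventType: string): boolean {
  return TERMINAL_EVENT_TYPES.has(eventType);
}
```

A parity test would then reuse an existing `response.completed` fixture with the event type swapped to `response.done` and assert identical output.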

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9d194bb and b32af10.

📒 Files selected for processing (2)
  • packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2025-12-18T14:06:22.557Z
Learning: OpenAIResponsesProvider (packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts) currently bypasses the ephemeral truncation system by using direct `JSON.stringify(toolResponseBlock.result)` and needs to be updated to support ephemeral settings like the other providers.
📚 Learning: 2025-12-18T14:06:22.557Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2025-12-18T14:06:22.557Z
Learning: OpenAIResponsesProvider (packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts) currently bypasses the ephemeral truncation system by using direct `JSON.stringify(toolResponseBlock.result)` and needs to be updated to support ephemeral settings like the other providers.

Applied to files:

  • packages/core/src/providers/openai/parseResponsesStream.ts
  • packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts
📚 Learning: 2025-11-16T22:51:26.374Z
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.

Applied to files:

  • packages/core/src/providers/openai/parseResponsesStream.ts
  • packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts
🧬 Code graph analysis (1)
packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts (1)
packages/core/src/providers/openai/parseResponsesStream.ts (1)
  • parseResponsesStream (52-240)
⏰ Context from checks skipped due to timeout of 270000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Test (macos-latest, 24.x)
  • GitHub Check: Test (windows-latest, 24.x)
  • GitHub Check: Test (ubuntu-latest, 24.x)
  • GitHub Check: E2E Test (Linux) - sandbox:docker
  • GitHub Check: E2E Test (Linux) - sandbox:none
  • GitHub Check: Slow E2E - Win
  • GitHub Check: E2E Test (macOS)
🔇 Additional comments (10)
packages/core/src/providers/openai/parseResponsesStream.ts (2)

104-120: LGTM!

The done event handling correctly:

  • Uses event.text as an override when provided (final complete text from API), falling back to accumulated reasoningText
  • Applies .trim() check to avoid emitting empty/whitespace-only thinking blocks
  • Properly resets the accumulator after yielding
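For illustration, the done-event pattern the review describes can be sketched roughly as follows. This is a hedged sketch under assumed names (`handleReasoningDone`, the simplified `ThinkingBlock` shape), not the actual generator code from parseResponsesStream.ts:

```typescript
// Simplified ThinkingBlock shape for this sketch; the real interface lives
// in IContent.ts and has more fields.
interface ThinkingBlock {
  type: 'thinking';
  thought: string;
  sourceField: string;
}

// Sketch of the done-event handling: prefer the API's final complete text,
// fall back to the accumulated deltas, skip whitespace-only reasoning, and
// always reset the accumulator.
function handleReasoningDone(
  eventText: string | undefined,
  accumulated: string,
): { block: ThinkingBlock | null; nextAccumulated: string } {
  const finalText = eventText ?? accumulated;
  if (finalText.trim().length === 0) {
    // Nothing worth emitting, but the buffer is still cleared.
    return { block: null, nextAccumulated: '' };
  }
  return {
    block: {
      type: 'thinking',
      thought: finalText,
      sourceField: 'reasoning_content',
    },
    nextAccumulated: '',
  };
}
```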

190-205: LGTM!

Good defensive handling that flushes any pending reasoning content before emitting usage metadata. This correctly handles edge cases where reasoning delta events arrive but no explicit .done event follows before stream completion.

packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts (8)

4-18: LGTM!

Clean and reusable test helper for simulating SSE streams. The pull-based approach correctly mimics how chunks would arrive from a real network stream.
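A pull-based helper of this kind might look like the sketch below. The names (`sseStreamFromEvents`, `collect`) are illustrative assumptions, not the actual identifiers from the test file; the point is that each string is enqueued as a separate chunk only when the consumer pulls, mimicking network arrival:

```typescript
// Each string in `events` becomes one chunk, delivered lazily on pull().
function sseStreamFromEvents(events: string[]): ReadableStream<Uint8Array> {
  const encoder = new TextEncoder();
  let i = 0;
  return new ReadableStream<Uint8Array>({
    pull(controller) {
      if (i < events.length) {
        controller.enqueue(encoder.encode(events[i++]));
      } else {
        controller.close();
      }
    },
  });
}

// Drain the stream back into a string, decoding chunk by chunk.
async function collect(stream: ReadableStream<Uint8Array>): Promise<string> {
  const decoder = new TextDecoder();
  const reader = stream.getReader();
  let out = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done || value === undefined) break;
    out += decoder.decode(value, { stream: true });
  }
  return out;
}
```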


21-47: LGTM!

Comprehensive test for the basic reasoning flow. Good assertions on the accumulated thought content and sourceField.


49-88: LGTM!

Good integration test covering the interleaving scenario with reasoning, text output, and tool calls. The assertions verify all three content types are correctly parsed.


90-112: LGTM!

Good edge-case coverage for empty/whitespace-only reasoning. The JSON-escaped \\n\\t sequences correctly become actual whitespace characters after parsing.


114-141: LGTM!

Good test verifying that multiple reasoning deltas are correctly accumulated into a single thinking block rather than emitting multiple blocks.


143-172: LGTM!

Good integration test covering the full flow with reasoning, text, and usage metadata. Also verifies the cachedTokens default behavior.


174-196: LGTM!

Good test coverage for reasoning_summary_text event type, including verification that the text field from the done event is correctly used.


198-219: LGTM!

Critical test verifying the flush behavior when reasoning deltas arrive without an explicit done event before stream completion. This ensures the PR objective of emitting reasoning before usage metadata is met.

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 5

🤖 Fix all issues with AI agents
In `@packages/cli/src/runtime/runtimeSettings.reasoningSummary.test.ts`:
- Around line 18-27: The test checking that PROFILE_EPHEMERAL_KEYS includes all
reasoning.* keys is missing the 'reasoning.verbosity' key; update the test in
runtimeSettings.reasoningSummary.test.ts to add an expectation that
PROFILE_EPHEMERAL_KEYS contains 'reasoning.verbosity' (i.e., add
expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.verbosity'); alongside the
other reasoning.* assertions) so the PROFILE_EPHEMERAL_KEYS coverage remains
complete.

In `@packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts`:
- Around line 790-795: In OpenAIResponsesProvider.ts update the block that
assigns request.tool_choice when responsesTools exist so it does not
unconditionally overwrite a user-provided value: check whether
request.tool_choice is already set (or non-empty) before assigning 'auto' (leave
user-specified values like 'required' or a function name intact), and continue
to set request.parallel_tool_calls = true as before; reference the local
variable request and responsesTools to locate the change and mirror the
conditional logic used in buildResponsesRequest.ts.

In `@packages/core/src/providers/openai/parseResponsesStream.ts`:
- Around line 196-246: The reasoning blocks can be emitted twice via
response.reasoning_text.done and response.output_item.done paths; add a Set
(e.g., emittedReasoningIds) scoped to this stream/parser to deduplicate by
event.item.id or event.item.sequence_number: when handling
response.output_item.done (the branch using
event.item.summary/event.item.content and variables
reasoningText/reasoningSummaryText) check if the item's id or sequence_number is
already in emittedReasoningIds before assembling/yielding the thinking block,
and on yielding add that id/sequence_number to the set; also consult the same
set in the response.reasoning_text.done path (which yields from accumulated
reasoningText) so it doesn't re-emit content for an id that was already emitted,
and ensure the fallback resets still occur but do not clear the deduplication
Set.

In `@packages/core/src/services/history/IContent.ts`:
- Around line 192-193: ContentValidation currently only treats the `thought`
field as content and will mark blocks with only `encryptedContent` as empty;
update the validation logic in ContentValidation to consider `encryptedContent`
(the IContent.encryptedContent property) as valid content as well so that
thinking blocks carrying only encryptedContent are not dropped during
processing. Locate the ContentValidation function/validator that references
`thought` and add a check that treats non-empty `encryptedContent` as
content-equivalent, ensuring any branches or emptiness checks use both `thought`
and `encryptedContent` when deciding to keep or drop a block.

In `@packages/vscode-ide-companion/NOTICES.txt`:
- Around line 36-40: The NOTICES.txt entry for `@hono/node-server@1.19.9`
currently shows "License text not found."; replace that placeholder with the
full MIT license text exactly as provided (the standard MIT permission grant,
conditions, and disclaimer), so the entry no longer shows "License text not
found."
🧹 Nitpick comments (3)
packages/core/src/providers/openai/parseResponsesStream.ts (1)

95-112: Consider removing unused lastLoggedType tracking.

The lastLoggedType variable is assigned on lines 98-100 but never used for any conditional logic. If it was intended for deduplicating log output, the condition should actually use it to skip logging.

♻️ Suggested cleanup

Either remove the unused variable:

-            // SSE event visibility for debugging reasoning support.
-            // We log to stderr directly so it shows up in debug logs even if
-            // Track last logged type to avoid duplicate logs
-            if (event.type !== lastLoggedType) {
-              lastLoggedType = event.type;
-            }

Or use it to actually deduplicate logs:

             if (event.type !== lastLoggedType) {
               lastLoggedType = event.type;
+              logger.debug(() => `New event type: ${event.type}`);
             }
-
-            // Debug: Log ALL events with full details
-            logger.debug(() => `SSE event: type=${event.type}, ...`);
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (2)

593-607: Potential ID collision with Date.now() for reasoning items.

Using Date.now() to generate reasoning item IDs could produce duplicates if multiple thinking blocks are processed within the same millisecond. Consider using a more robust ID generation approach.

♻️ Suggested fix
+        let reasoningIdCounter = 0;
         // Add reasoning items if they have encrypted_content and reasoning should be included
         if (includeReasoningInContext) {
           for (const thinkingBlock of thinkingBlocks) {
             if (thinkingBlock.encryptedContent) {
               input.push({
                 type: 'reasoning',
-                id: `reasoning_${Date.now()}`,
+                id: `reasoning_${Date.now()}_${reasoningIdCounter++}`,
                 summary: [
                   { type: 'summary_text', text: thinkingBlock.thought },
                 ],
                 encrypted_content: thinkingBlock.encryptedContent,
               });
             }
           }
         }

Or use a random suffix similar to generateSyntheticCallId().


762-770: Reasoning items are added then immediately removed in Codex mode.

In Codex mode, reasoning items are added to input (lines 594-607) and then immediately filtered out (lines 762-764). Consider skipping the reasoning item addition when in Codex mode to avoid unnecessary work.

♻️ Suggested optimization

Move the Codex mode check earlier:

         // Add reasoning items if they have encrypted_content and reasoning should be included
-        if (includeReasoningInContext) {
+        // Skip reasoning items for Codex mode - they get filtered out later anyway
+        const baseURLForCheck = options.resolved.baseURL ?? this.getBaseURL();
+        const isCodexMode = this.isCodexMode(baseURLForCheck);
+        if (includeReasoningInContext && !isCodexMode) {
           for (const thinkingBlock of thinkingBlocks) {

Or simply document the current behavior if there's a reason for it.

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7ca2ba9 and 4e3329d.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (17)
  • packages/cli/src/providers/aliases/codex.config
  • packages/cli/src/providers/providerAliases.codex.reasoningSummary.test.ts
  • packages/cli/src/runtime/runtimeSettings.reasoningSummary.test.ts
  • packages/cli/src/runtime/runtimeSettings.ts
  • packages/cli/src/settings/ephemeralSettings.reasoningSummary.test.ts
  • packages/cli/src/settings/ephemeralSettings.reasoningVerbosity.test.ts
  • packages/cli/src/settings/ephemeralSettings.ts
  • packages/cli/src/ui/commands/setCommand.test.ts
  • packages/cli/src/ui/commands/setCommand.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningInclude.test.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.textVerbosity.test.ts
  • packages/core/src/providers/openai/openaiRequestParams.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
  • packages/core/src/services/history/IContent.ts
  • packages/vscode-ide-companion/NOTICES.txt
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.
📚 Learning: 2025-11-16T22:51:26.374Z
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.

Applied to files:

  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
📚 Learning: 2026-01-13T19:28:00.789Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2026-01-13T19:28:00.789Z
Learning: In the llxprt-code codebase (`packages/core/src/providers/anthropic/AnthropicProvider.ts`), Anthropic's API returns `contentBlock.input` as an already-parsed JavaScript object, not a JSON string. The code was incorrectly calling `JSON.stringify(contentBlock.input)` before passing it to `processToolParameters()`, which was designed for OpenAI-style string parameters. This causes arrays and other complex types to be corrupted into strings (e.g., `paths` array becomes a string `"[\"**/*.toml\"]"` instead of actual array). The fix is to use `contentBlock.input` directly without stringifying for Anthropic provider.

Applied to files:

  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
🧬 Code graph analysis (7)
packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts (2)
packages/core/src/index.ts (3)
  • setActiveProviderRuntimeContext (276-276)
  • createProviderRuntimeContext (274-274)
  • clearActiveProviderRuntimeContext (277-277)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)
  • OpenAIResponsesProvider (70-1015)
packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.textVerbosity.test.ts (1)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)
  • OpenAIResponsesProvider (70-1015)
packages/cli/src/providers/providerAliases.codex.reasoningSummary.test.ts (1)
packages/cli/src/providers/providerAliases.ts (1)
  • loadProviderAliasEntries (115-150)
packages/cli/src/settings/ephemeralSettings.reasoningSummary.test.ts (1)
packages/cli/src/settings/ephemeralSettings.ts (2)
  • ephemeralSettingHelp (9-107)
  • isValidEphemeralSetting (556-562)
packages/cli/src/settings/ephemeralSettings.reasoningVerbosity.test.ts (1)
packages/cli/src/settings/ephemeralSettings.ts (2)
  • parseEphemeralSettingValue (127-525)
  • ephemeralSettingHelp (9-107)
packages/cli/src/runtime/runtimeSettings.reasoningSummary.test.ts (1)
packages/cli/src/runtime/runtimeSettings.ts (1)
  • PROFILE_EPHEMERAL_KEYS (895-942)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)
packages/core/src/services/history/IContent.ts (1)
  • ThinkingBlock (177-194)
🔇 Additional comments (30)
packages/core/src/providers/openai/openaiRequestParams.ts (1)

81-84: LGTM!

The filter correctly prevents the internal sentinel value 'none' from being sent to the API. The implementation follows the existing pattern and the comment clearly documents the intent.

packages/vscode-ide-companion/NOTICES.txt (3)

10-10: Version bump looks fine; verify notice matches upstream license.


1019-1019: qs version update OK; confirm license block matches the new version.


2272-2333: Added json-schema-typed notice looks complete; verify against upstream.

packages/cli/src/ui/commands/setCommand.test.ts (1)

110-118: Updated key list looks correct.

packages/cli/src/providers/aliases/codex.config (1)

8-11: Reasoning defaults look good for codex.

packages/cli/src/runtime/runtimeSettings.ts (1)

925-932: Key exposure update looks correct.

packages/cli/src/ui/commands/setCommand.ts (1)

315-335: New reasoning options are wired cleanly into /set.

packages/cli/src/settings/ephemeralSettings.reasoningSummary.test.ts (1)

1-70: Good coverage for reasoning.summary help + validation.

packages/cli/src/providers/providerAliases.codex.reasoningSummary.test.ts (1)

19-45: LGTM — solid coverage of codex.config reasoning defaults.

packages/cli/src/settings/ephemeralSettings.ts (1)

443-562: LGTM — validation and helper align with existing patterns.

packages/cli/src/settings/ephemeralSettings.reasoningVerbosity.test.ts (1)

13-70: LGTM — good coverage for reasoning.verbosity validation and help text.

packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts (1)

42-366: LGTM — request payload assertions look solid across summary modes.

packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.textVerbosity.test.ts (1)

44-352: LGTM — thorough coverage for text.verbosity behavior.

packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningInclude.test.ts (8)

1-56: LGTM! Well-structured test setup.

The test file has proper license headers, clear documentation of the test plan, and correctly manages mock state with beforeEach/afterEach hooks. The runtime context setup is appropriate for isolated testing.


58-118: LGTM! Good test coverage for reasoning.enabled flag.

The test correctly verifies that setting reasoning.enabled=true results in the include parameter being added to the request body.


120-183: LGTM! Validates alternative trigger for include parameter.

Good coverage ensuring include is added when only reasoning.effort is set without reasoning.enabled.


185-246: LGTM! Important negative test case.

Correctly validates that include is not added when reasoning is not explicitly requested.


248-318: LGTM! Validates internal field stripping.

Correctly ensures that internal settings (enabled, includeInContext, includeInResponse) are stripped from API requests while preserving valid API fields like effort.


321-399: LGTM! Comprehensive SSE parsing test.

Good coverage of parsing response.output_item.done events with reasoning type, verifying both the thinking block content and encrypted content preservation for round-trip.


401-472: LGTM! Tests delta accumulation correctly.

Validates that response.reasoning_summary_text.delta events are properly accumulated and yielded as a thinking block.


475-663: LGTM! Good coverage of context inclusion behavior.

Both tests correctly validate the reasoning.includeInContext setting - including encrypted content when true and stripping reasoning items when false.

packages/core/src/providers/openai/parseResponsesStream.ts (4)

10-12: LGTM! Good addition of debug logging.

The DebugLogger initialization with a specific namespace helps with debugging reasoning support issues.


14-50: LGTM! Interface extensions match API schema.

The extended ResponsesEvent interface properly accommodates the reasoning-related fields from the OpenAI Responses API.


126-172: LGTM! Separate buffer handling addresses previous review feedback.

The implementation correctly maintains separate accumulators for reasoning_text and reasoning_summary_text as recommended in past reviews, with proper buffer reset after yielding.


292-320: LGTM! Defensive flush of reasoning buffers on completion.

Good defensive measure to ensure any accumulated reasoning content is emitted before usage metadata, even if the stream ends without explicit done events.

packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (4)

29-35: LGTM! Proper import of ThinkingBlock type.

The ThinkingBlock import is correctly added to support reasoning block handling in the provider.


541-556: LGTM! ResponsesInputItem type correctly extended.

The reasoning variant matches the OpenAI Responses API expected format with type, id, summary array, and encrypted_content.


797-859: LGTM! Comprehensive reasoning settings handling.

The implementation correctly:

  • Adds include parameter when reasoning is enabled or effort is set
  • Adds reasoning.summary when configured
  • Validates and adds text.verbosity with proper lowercase normalization
  • Includes appropriate debug logging throughout

351-367: LGTM! Function signature updated to include reasoning type.

The type union correctly includes the new reasoning variant to match ResponsesInputItem.


Comment on lines +18 to +27
it('should include all reasoning.* keys in PROFILE_EPHEMERAL_KEYS', () => {
// Verify all reasoning settings are saveable
expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.enabled');
expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.includeInContext');
expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.includeInResponse');
expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.format');
expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.stripFromContext');
expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.effort');
expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.maxTokens');
expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.summary');

⚠️ Potential issue | 🟡 Minor

Add `reasoning.verbosity` to the “all reasoning.* keys” assertion.

The test promises full reasoning coverage but omits reasoning.verbosity, so a regression could slip through unnoticed.

✅ Proposed fix
   expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.maxTokens');
   expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.summary');
+  expect(PROFILE_EPHEMERAL_KEYS).toContain('reasoning.verbosity');

… summaries

Adds text.verbosity parameter to OpenAI Responses API requests to enable
thinking/reasoning summary output from GPT-5.x Codex models.

Changes:
- Add text.verbosity ephemeral setting (low/medium/high)
- Add reasoning.summary ephemeral setting (auto/concise/detailed/none)
- Both settings saveable via /profile save
- Add autocomplete support in /set command
- Send text: { verbosity } in request body per codex-rs implementation
- Send reasoning.summary in request body
- Add include: ['reasoning.encrypted_content'] when reasoning enabled
- Add encryptedContent field to ThinkingBlock for round-trip
- Enhanced SSE parsing with debug logging for reasoning events
- Update codex.config alias with default reasoning settings

Issue: #922
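The request-body wiring these bullets describe might look roughly like the sketch below. The `ReasoningSettings` shape and `buildReasoningParams` name are assumptions for illustration; the field names (`text.verbosity`, `reasoning.summary`, `include`) follow the bullets above:

```typescript
interface ReasoningSettings {
  enabled?: boolean;
  effort?: string;
  summary?: 'auto' | 'concise' | 'detailed' | 'none';
  verbosity?: 'low' | 'medium' | 'high';
}

// Sketch: assemble the reasoning-related portion of a Responses API request.
function buildReasoningParams(s: ReasoningSettings): Record<string, unknown> {
  const body: Record<string, unknown> = {};
  if (s.verbosity) {
    // Send text: { verbosity } per the codex-rs behavior described above.
    body.text = { verbosity: s.verbosity };
  }
  // 'none' is an internal sentinel and is never sent to the API.
  if (s.summary && s.summary !== 'none') {
    body.reasoning = {
      ...(s.effort ? { effort: s.effort } : {}),
      summary: s.summary,
    };
  } else if (s.effort) {
    body.reasoning = { effort: s.effort };
  }
  // Request encrypted reasoning back for round-tripping when reasoning is on.
  if (s.enabled || s.effort) {
    body.include = ['reasoning.encrypted_content'];
  }
  return body;
}
```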
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@packages/core/src/providers/openai/parseResponsesStream.ts`:
- Around line 232-244: The yielded "thinking" block is missing the
encryptedContent field even though encryptedContent is logged earlier; inside
the generator that yields the AI thinking block (where thoughtText is used) add
encryptedContent: encryptedContent to the block object so the yielded payload
includes the preserved reasoning content (ensure you update the block in the
yield that constructs { type: 'thinking', thought: thoughtText, sourceField:
'reasoning_content' } to include encryptedContent).
♻️ Duplicate comments (4)
packages/vscode-ide-companion/NOTICES.txt (1)

36-40: Add the MIT license text for @hono/node-server.

The license text for @hono/node-server@1.19.9 is still missing. This issue was previously flagged and the MIT License text was provided in earlier review comments.

packages/core/src/services/history/IContent.ts (1)

177-193: Encrypted-only thinking blocks may be dropped by validation.
Content validation still keys off thought only; encrypted-only reasoning can be treated as empty.
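The emptiness check would need to treat encrypted content as content-equivalent, along the lines of the sketch below. The `ThinkingBlockLike` shape and `isThinkingBlockEmpty` name are assumptions; the actual validator lives in IContent.ts:

```typescript
interface ThinkingBlockLike {
  thought?: string;
  encryptedContent?: string;
}

// A block counts as non-empty if it carries visible thought text OR
// encrypted reasoning content that must survive for round-tripping.
function isThinkingBlockEmpty(block: ThinkingBlockLike): boolean {
  const hasThought = (block.thought ?? '').trim().length > 0;
  const hasEncrypted = (block.encryptedContent ?? '').length > 0;
  return !hasThought && !hasEncrypted;
}
```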

packages/core/src/providers/openai/parseResponsesStream.ts (1)

196-246: Potential duplicate reasoning block emissions remain unaddressed.

Per the past review, both response.reasoning_text.done (lines 140-155) and response.output_item.done with type=reasoning (lines 201-244) can emit thinking blocks for the same reasoning content. The buffer resets (lines 218-225) only prevent duplication when the fallback path is used, but if item.summary or item.content arrays are present in output_item.done, a thinking block is yielded regardless of whether reasoning_text.done already emitted.

Consider adding a Set<string> to track emitted reasoning by item.id to prevent yielding the same reasoning block twice.
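The suggested deduplication could be as small as the following sketch. The helper name is illustrative; only `emittedReasoningIds` and keying on `item.id` come from the suggestion itself:

```typescript
// Tracks which reasoning items have already produced a thinking block,
// scoped to one stream/parser instance.
const emittedReasoningIds = new Set<string>();

// Returns true if a thinking block for this reasoning item should be emitted,
// recording the id so a second done-path for the same item is skipped.
function shouldEmitReasoning(itemId: string | undefined): boolean {
  if (itemId === undefined) return true; // no id to dedupe on
  if (emittedReasoningIds.has(itemId)) return false;
  emittedReasoningIds.add(itemId);
  return true;
}
```

Both the `response.reasoning_text.done` path and the `response.output_item.done` path would consult this before yielding, and buffer resets would leave the set untouched.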

packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)

790-795: User-specified tool_choice is still unconditionally overwritten.

Per the past review, this code should check if tool_choice was already provided (e.g., 'required' or a specific function) before defaulting to 'auto'. The current implementation overwrites any user-specified value.

     if (responsesTools && responsesTools.length > 0) {
       request.tools = responsesTools;
-      // Per codex-rs: always set tool_choice and parallel_tool_calls when tools are present
-      request.tool_choice = 'auto';
+      // Per codex-rs: set tool_choice when tools are present, respecting user-specified values
+      if (!request.tool_choice) {
+        request.tool_choice = 'auto';
+      }
       request.parallel_tool_calls = true;
     }
🧹 Nitpick comments (2)
packages/core/src/providers/openai/parseResponsesStream.ts (1)

95-112: Remove or use the lastLoggedType tracking variable.

The lastLoggedType variable is assigned (lines 98-100) but never used for any conditional logic. Either remove it or implement the intended deduplication behavior.

-            // SSE event visibility for debugging reasoning support.
-            // We log to stderr directly so it shows up in debug logs even if
-            // Track last logged type to avoid duplicate logs
-            if (event.type !== lastLoggedType) {
-              lastLoggedType = event.type;
-            }
-
             // Debug: Log ALL events with full details
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)

593-607: Consider using a more unique ID for reasoning items.

Date.now() (line 599) can produce duplicate IDs if multiple thinking blocks are processed within the same millisecond. Consider using a counter or random suffix similar to generateSyntheticCallId().

+        let reasoningIdCounter = 0;
         for (const thinkingBlock of thinkingBlocks) {
           if (thinkingBlock.encryptedContent) {
             input.push({
               type: 'reasoning',
-              id: `reasoning_${Date.now()}`,
+              id: `reasoning_${Date.now()}_${reasoningIdCounter++}`,
               summary: [
                 { type: 'summary_text', text: thinkingBlock.thought },
               ],
               encrypted_content: thinkingBlock.encryptedContent,
             });
           }
         }
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4e3329d and 7fed2d2.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (17)
  • packages/cli/src/providers/aliases/codex.config
  • packages/cli/src/providers/providerAliases.codex.reasoningSummary.test.ts
  • packages/cli/src/runtime/runtimeSettings.reasoningSummary.test.ts
  • packages/cli/src/runtime/runtimeSettings.ts
  • packages/cli/src/settings/ephemeralSettings.reasoningSummary.test.ts
  • packages/cli/src/settings/ephemeralSettings.textVerbosity.test.ts
  • packages/cli/src/settings/ephemeralSettings.ts
  • packages/cli/src/ui/commands/setCommand.test.ts
  • packages/cli/src/ui/commands/setCommand.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningInclude.test.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.textVerbosity.test.ts
  • packages/core/src/providers/openai/openaiRequestParams.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
  • packages/core/src/services/history/IContent.ts
  • packages/vscode-ide-companion/NOTICES.txt
🚧 Files skipped from review as they are similar to previous changes (7)
  • packages/cli/src/ui/commands/setCommand.test.ts
  • packages/core/src/providers/openai/openaiRequestParams.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.textVerbosity.test.ts
  • packages/cli/src/ui/commands/setCommand.ts
  • packages/cli/src/providers/aliases/codex.config
  • packages/cli/src/runtime/runtimeSettings.reasoningSummary.test.ts
  • packages/cli/src/settings/ephemeralSettings.reasoningSummary.test.ts
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.
📚 Learning: 2025-11-16T22:51:26.374Z
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.

Applied to files:

  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
📚 Learning: 2026-01-13T19:28:00.789Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2026-01-13T19:28:00.789Z
Learning: In the llxprt-code codebase (`packages/core/src/providers/anthropic/AnthropicProvider.ts`), Anthropic's API returns `contentBlock.input` as an already-parsed JavaScript object, not a JSON string. The code was incorrectly calling `JSON.stringify(contentBlock.input)` before passing it to `processToolParameters()`, which was designed for OpenAI-style string parameters. This causes arrays and other complex types to be corrupted into strings (e.g., `paths` array becomes a string `"[\"**/*.toml\"]"` instead of actual array). The fix is to use `contentBlock.input` directly without stringifying for Anthropic provider.

Applied to files:

  • packages/core/src/services/history/IContent.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
🧬 Code graph analysis (5)
packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts (1)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)
  • OpenAIResponsesProvider (70-1015)
packages/cli/src/providers/providerAliases.codex.reasoningSummary.test.ts (1)
packages/cli/src/providers/providerAliases.ts (1)
  • loadProviderAliasEntries (115-150)
packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningInclude.test.ts (1)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)
  • OpenAIResponsesProvider (70-1015)
packages/cli/src/settings/ephemeralSettings.textVerbosity.test.ts (1)
packages/cli/src/settings/ephemeralSettings.ts (2)
  • parseEphemeralSettingValue (127-525)
  • ephemeralSettingHelp (9-107)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)
packages/core/src/services/history/IContent.ts (1)
  • ThinkingBlock (177-194)
🔇 Additional comments (29)
packages/vscode-ide-companion/NOTICES.txt (1)

10-10: LGTM!

The dependency version updates and the new [email protected] entry with complete BSD 2-Clause License text are properly documented.

Also applies to: 1019-1019, 2272-2333

packages/cli/src/settings/ephemeralSettings.textVerbosity.test.ts (1)

13-62: LGTM — solid coverage for text.verbosity parsing and help text.

packages/cli/src/providers/providerAliases.codex.reasoningSummary.test.ts (1)

19-45: LGTM — good safety net for codex alias reasoning defaults.

packages/cli/src/runtime/runtimeSettings.ts (1)

895-932: LGTM — new ephemeral keys are correctly surfaced for profile snapshots.

packages/cli/src/settings/ephemeralSettings.ts (3)

77-80: Help text additions look good.


443-469: Validation for reasoning.summary and text.verbosity looks correct.


556-561: Function is unused in production; concern about value coercion is theoretical but valid.

isValidEphemeralSetting is only tested, never called in production code. The stringification does lose type information (undefined → "undefined", objects → "[object Object]"), but existing tests confirm that rejection of non-string types works because parseValue re-parses the stringified value. However, the current design is fragile: if this function is intended as a public API, consider either:

  • Removing String() coercion and accepting only string inputs (matching parseEphemeralSettingValue's contract)
  • Or explicitly documenting that typed inputs are deliberately stringified, with clear test coverage for edge cases like undefined and objects
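The coercion pitfall called out above can be sketched in isolation. This is a hypothetical stand-in, not the real isValidEphemeralSetting — the point is only that String() erases type information before validation ever runs:

```typescript
// Hypothetical sketch of the String() coercion pitfall (not the real
// isValidEphemeralSetting): the parser only ever sees a string, so
// undefined and plain objects arrive as "undefined" / "[object Object]".
function isValidEphemeralSettingSketch(
  parse: (key: string, value: string) => boolean,
  key: string,
  value: unknown,
): boolean {
  // String(undefined) === "undefined", String({}) === "[object Object]"
  return parse(key, String(value));
}
```

Because the real parseValue happens to reject those stringified forms, the behavior is correct today — but only incidentally, which is the fragility the comment describes.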
packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts (1)

23-366: LGTM — comprehensive coverage for reasoning.summary request shaping.

packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningInclude.test.ts (8)

1-56: Well-structured test setup with proper isolation.

The test file has good organization with proper beforeEach/afterEach hooks for mock cleanup and runtime context management. The use of setActiveProviderRuntimeContext ensures proper isolation between tests.


58-118: LGTM!

Test correctly verifies that include: ["reasoning.encrypted_content"] is added when reasoning.enabled=true. The request body capture pattern and JSON parsing are appropriate.


120-183: LGTM!

Good coverage for the case where reasoning.effort triggers the include parameter without explicit reasoning.enabled. Verifying both the include parameter and the effort value in the request body is thorough.


185-246: LGTM!

Good negative test case ensuring the include parameter isn't added when reasoning isn't requested. This prevents unnecessary API overhead.


248-318: LGTM!

Critical test ensuring client-side settings (enabled, includeInContext, includeInResponse) are stripped from the API request while preserving API-relevant fields like effort. This prevents API errors from unrecognized fields.


322-399: LGTM!

Comprehensive test for parsing response.output_item.done events with reasoning type. Good verification of both the human-readable summary text and the encryptedContent preservation for round-trip context.


401-472: LGTM!

Good test for delta accumulation from response.reasoning_summary_text.delta events. Verifies both parts of the streamed reasoning are accumulated and appear in the final thinking block.
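The accumulation behavior this test exercises can be sketched as a small reducer. This is illustrative rather than the provider's actual implementation; only the SSE event names follow the OpenAI Responses schema:

```typescript
// Minimal sketch of reasoning-summary delta accumulation: buffer each
// `.delta`, emit one block on `.done` (preferring the final text if the
// event carries one), then reset the buffer for the next summary.
interface ReasoningEvent {
  type: string;
  delta?: string;
  text?: string;
}

function accumulateReasoningSummary(events: ReasoningEvent[]): string[] {
  const blocks: string[] = [];
  let buffer = '';
  for (const ev of events) {
    if (ev.type === 'response.reasoning_summary_text.delta') {
      buffer += ev.delta ?? '';
    } else if (ev.type === 'response.reasoning_summary_text.done') {
      blocks.push(ev.text ?? buffer);
      buffer = '';
    }
  }
  return blocks;
}
```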


475-662: LGTM!

Excellent coverage for the reasoning context round-trip behavior. Tests correctly verify that encrypted_content is included in subsequent requests when includeInContext=true and stripped when false. This is essential for maintaining reasoning continuity across conversation turns.

packages/core/src/providers/openai/parseResponsesStream.ts (5)

10-12: LGTM!

Good addition of DebugLogger for reasoning event tracing and separate accumulators for reasoningText and reasoningSummaryText. This addresses the past review comment about tracking them separately.

Also applies to: 67-70


14-50: LGTM!

Type extensions correctly model the OpenAI Responses API reasoning event schema with text, content_index, summary_index, and the summary/content/encrypted_content fields on items.


126-138: LGTM!

Delta accumulation correctly uses separate buffers for reasoning_text and reasoning_summary_text events, addressing the past review concern about merging distinct content types.


140-172: LGTM!

The *.done handlers correctly yield thinking blocks using either the final event.text or the accumulated buffer, then reset the buffer. The consistent use of sourceField: 'reasoning_content' maintains compatibility with round-trip serialization.


292-320: LGTM!

Good safety net to flush any remaining accumulated reasoning before emitting usage metadata. This ensures reasoning content isn't lost if the stream ends without explicit *.done events.
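The flush-before-usage ordering described here can be sketched as a tiny generator. Type and field names are illustrative assumptions, not the parser's real shapes:

```typescript
// Illustrative end-of-stream safety net: any reasoning still buffered is
// emitted as a thinking chunk *before* the usage-metadata chunk, so
// reasoning isn't lost when the stream ends without a *.done event.
type Chunk =
  | { kind: 'thinking'; thought: string }
  | { kind: 'usage'; totalTokens: number };

function* flushBeforeUsage(
  buffer: string,
  totalTokens: number,
): Generator<Chunk> {
  if (buffer.length > 0) {
    yield { kind: 'thinking', thought: buffer };
  }
  yield { kind: 'usage', totalTokens };
}
```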

packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (8)

29-35: LGTM!

Good addition of ThinkingBlock import to support reasoning block handling in the provider.


541-556: LGTM!

The ResponsesInputItem type extension correctly models the reasoning input format for the Responses API, enabling round-trip reasoning content with summary and encrypted_content fields.


567-569: LGTM!

Good default behavior allowing reasoning in context unless explicitly disabled. The !== false check handles undefined/null gracefully.
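The default-on idiom noted here is worth pinning down, since it is easy to misread. A minimal sketch (the setting name is from this PR; the helper is hypothetical):

```typescript
// Default-on semantics for reasoning.includeInContext: undefined, null,
// and any non-false value count as enabled; only an explicit `false`
// disables the behavior.
function includeReasoningInContextDefault(value: unknown): boolean {
  return value !== false;
}
```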


797-819: LGTM!

The reasoning include parameter logic correctly computes shouldRequestReasoning based on reasoning.enabled or reasoning.effort, and adds the appropriate include parameter. Debug logging provides good observability.


821-835: LGTM!

Good handling of reasoning.summary with proper validation and safe object creation. Skipping 'none' values prevents unnecessary API parameters.


842-859: LGTM!

Good validation of text.verbosity against allowed values (low, medium, high) with case-insensitive matching. The structure aligns with the OpenAI Responses API format.
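The validation described here can be sketched as a small normalizer. The allowed values come from the review comment; the function name and shape are assumptions:

```typescript
// Case-insensitive normalization of text.verbosity against the allowed
// Responses API values; anything else is rejected by returning undefined.
function normalizeVerbosity(
  value: string,
): 'low' | 'medium' | 'high' | undefined {
  const v = value.toLowerCase();
  return v === 'low' || v === 'medium' || v === 'high' ? v : undefined;
}
```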


351-367: LGTM!

The updated type signature correctly includes the reasoning variant, maintaining consistency with the ResponsesInputItem union type.


762-770: Add clarifying comment: Reasoning items are permanently excluded from Codex requests without re-addition.

Lines 762-770 filter reasoning items before synthetic config injection, but they are never re-added to the final request. While reasoning is configured via the include and request.reasoning parameters, the permanent removal of reasoning items from the input array is not documented. Clarify whether this is intentional (e.g., Codex API limitation) or if reasoning items from context should be re-added after synthetic injection.

✏️ Tip: You can disable this entire section by setting review_details to false in your review settings.

…d deduplicate thinking blocks

- Add includeThinkingInResponse option to parseResponsesStream to filter thinking blocks at parse time
- Pass reasoning.includeInResponse setting from OpenAIResponsesProvider to parseResponsesStream
- Fix thinking block duplication in interactive mode by:
  - Resetting thinkingBlocksRef at start of each stream (useGeminiStream)
  - Deduplicating thought events by content (useGeminiStream)
  - Only showing thinking blocks when item is not pending (GeminiMessage)
- Add emittedThoughts Set to deduplicate across all emission paths in parseResponsesStream
- Non-interactive mode already works correctly with consolidated buffer approach

Fixes #922

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)

568-605: Reasoning item IDs can collide within a request.

Date.now() can return the same value for multiple thinking blocks in the same loop, producing duplicate ids in the input array. That can cause items to overwrite or be rejected. Use a per-request counter or UUID.

🔧 Proposed fix (monotonic counter)
-    const includeReasoningInContext =
-      options.settings?.get('reasoning.includeInContext') !== false;
+    const includeReasoningInContext =
+      options.settings?.get('reasoning.includeInContext') !== false;
+    let reasoningCounter = 0;
...
-              input.push({
-                type: 'reasoning',
-                id: `reasoning_${Date.now()}`,
+              input.push({
+                type: 'reasoning',
+                id: `reasoning_${Date.now()}_${reasoningCounter++}`,
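The proposed fix can also be expressed as a runnable sketch of the idea, independent of the provider's surrounding code:

```typescript
// Monotonic-counter id generation: Date.now() alone can repeat for
// multiple thinking blocks pushed within one request loop, so a
// per-request counter is appended to guarantee unique reasoning ids.
function makeReasoningIdFactory(): () => string {
  let counter = 0;
  const stamp = Date.now(); // captured once per request
  return () => `reasoning_${stamp}_${counter++}`;
}
```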
🤖 Fix all issues with AI agents
In `@packages/cli/src/nonInteractiveCli.test.ts`:
- Around line 749-847: The two it.skip test blocks are declared outside the
describe('runNonInteractive') scope so they reference out-of-scope fixtures
(mockGeminiClient, mockConfig, createStreamFromEvents, processStdoutSpy) and
cause TypeScript failures; move those it.skip blocks into the existing
describe('runNonInteractive') block (or create a new describe with the same
beforeEach/fixtures) so they can access mockGeminiClient, mockConfig,
createStreamFromEvents, and processStdoutSpy, ensuring the tests use the same
setup/teardown as the other tests.

In `@packages/cli/src/nonInteractiveCli.ts`:
- Around line 171-189: The flushThoughtBuffer currently emits raw <think> tags
when includeThinking is true even in STREAM_JSON mode; update the logic that
computes includeThinking (and/or the flushThoughtBuffer path) to also check that
streamJsonOutput is false (i.e., only emit thinking when !streamJsonOutput), or
route thinking through the StreamJsonFormatter when streamJsonOutput is true;
modify the evaluation of includeThinking (which currently depends on jsonOutput
and config.getEphemeralSetting) and/or guard flushThoughtBuffer so that
thoughtBuffer is never written directly to stdout in STREAM_JSON mode.

In `@packages/cli/src/ui/hooks/useGeminiStream.ts`:
- Around line 951-1010: Trim event.value.subject and event.value.description
before composing thoughtContent in the ServerGeminiEventType.Thought handler
(use the existing symbols: event.value, thinkingBlocksRef, setThought,
ThinkingBlock, setPendingHistoryItem) and if the trimmed subject+description
result is empty/whitespace, skip creating/adding the ThinkingBlock and do not
call setThought or update setPendingHistoryItem; otherwise proceed as before
using the trimmed values to build thoughtContent and add the block.
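The trim-then-skip logic requested above can be sketched as a pure helper. The `subject: description` composition format is an assumption for illustration, not necessarily the hook's exact formatting:

```typescript
// Compose thought content from trimmed subject/description; return
// undefined when nothing survives trimming so the caller can skip
// creating a ThinkingBlock entirely.
function composeThought(
  subject: string,
  description: string,
): string | undefined {
  const s = subject.trim();
  const d = description.trim();
  const content = s && d ? `${s}: ${d}` : s || d;
  return content.length > 0 ? content : undefined;
}
```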

In `@packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts`:
- Around line 918-919: The debug line in OpenAIResponsesProvider that calls
this.logger.debug(() => `Request body FULL: ${requestBody}`) must not print raw
requestBody (PII/secret risk); replace it with a redacted summary by creating or
calling a sanitizer (e.g., sanitizeRequestBody/redactRequestBody) that strips
sensitive fields (prompts, encryptedReasoning, keys) or returns only top-level
keys/lengths, then log that sanitized summary via this.logger.debug; ensure
references to requestBody in the log use the sanitized output and keep the
original requestBody unmodified.

In `@packages/core/src/providers/openai/parseResponsesStream.ts`:
- Around line 196-246: The reasoning blocks are currently skipped when
includeThinkingInResponse is false; instead, modify the handlers for the event
cases 'response.reasoning_text.done' and 'response.reasoning_summary_text.done'
(and similarly for the output_item.done and response.done fallback handlers) to
still yield the thinking block but with isHidden: true when
includeThinkingInResponse is false, and preserve encryptedContent if present;
i.e., where the code now gates on includeThinkingInResponse to decide to emit,
always add a yield that sets isHidden: !includeThinkingInResponse (or only sets
isHidden when false) while keeping emittedThoughts checks, thought content
(reasoningText / reasoningSummaryText / output item text), sourceField, and any
encryptedContent, then clear the accumulated buffers (reasoningText,
reasoningSummaryText, etc.) as before.
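The hide-instead-of-drop behavior requested here reduces to one invariant, sketched below with illustrative types (the real parser threads this through its emittedThoughts dedupe and buffer resets):

```typescript
// Always emit the thinking block; when the caller opted out of visible
// thinking, mark it hidden rather than dropping it, so encryptedContent
// still round-trips into later request context.
interface ThinkingOut {
  thought: string;
  isHidden: boolean;
  encryptedContent?: string;
}

function emitThinking(
  thought: string,
  includeThinkingInResponse: boolean,
  encryptedContent?: string,
): ThinkingOut {
  return { thought, isHidden: !includeThinkingInResponse, encryptedContent };
}
```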
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 7fed2d2 and 37318d5.

📒 Files selected for processing (11)
  • packages/cli/src/nonInteractiveCli.test.ts
  • packages/cli/src/nonInteractiveCli.ts
  • packages/cli/src/settings/ephemeralSettings.textVerbosity.test.ts
  • packages/cli/src/ui/commands/setCommand.ts
  • packages/cli/src/ui/components/messages/GeminiMessage.tsx
  • packages/cli/src/ui/hooks/useGeminiStream.thinking.test.tsx
  • packages/cli/src/ui/hooks/useGeminiStream.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningInclude.test.ts
  • packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/cli/src/settings/ephemeralSettings.textVerbosity.test.ts
  • packages/cli/src/ui/commands/setCommand.ts
🧰 Additional context used
🧠 Learnings (3)
📓 Common learnings
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.
📚 Learning: 2025-11-16T22:51:26.374Z
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.

Applied to files:

  • packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts
  • packages/cli/src/nonInteractiveCli.test.ts
  • packages/cli/src/ui/hooks/useGeminiStream.ts
  • packages/cli/src/nonInteractiveCli.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
📚 Learning: 2026-01-13T19:28:00.789Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2026-01-13T19:28:00.789Z
Learning: In the llxprt-code codebase (`packages/core/src/providers/anthropic/AnthropicProvider.ts`), Anthropic's API returns `contentBlock.input` as an already-parsed JavaScript object, not a JSON string. The code was incorrectly calling `JSON.stringify(contentBlock.input)` before passing it to `processToolParameters()`, which was designed for OpenAI-style string parameters. This causes arrays and other complex types to be corrupted into strings (e.g., `paths` array becomes a string `"[\"**/*.toml\"]"` instead of actual array). The fix is to use `contentBlock.input` directly without stringifying for Anthropic provider.

Applied to files:

  • packages/cli/src/ui/hooks/useGeminiStream.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
🧬 Code graph analysis (4)
packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts (2)
packages/core/src/services/history/IContent.ts (1)
  • IContent (21-40)
packages/core/src/providers/openai/parseResponsesStream.ts (1)
  • parseResponsesStream (89-462)
packages/cli/src/nonInteractiveCli.test.ts (2)
packages/core/src/core/turn.ts (1)
  • ServerGeminiStreamEvent (222-239)
packages/cli/src/nonInteractiveCli.ts (1)
  • runNonInteractive (35-437)
packages/cli/src/ui/hooks/useGeminiStream.ts (2)
packages/core/src/integration-tests/geminiChat-isolation.integration.test.ts (1)
  • stream (101-111)
packages/core/src/services/history/IContent.ts (1)
  • ThinkingBlock (177-194)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (2)
packages/core/src/services/history/IContent.ts (1)
  • ThinkingBlock (177-194)
packages/core/src/providers/openai/parseResponsesStream.ts (2)
  • ParseResponsesStreamOptions (80-87)
  • parseResponsesStream (89-462)
🔇 Additional comments (13)
packages/cli/src/ui/hooks/useGeminiStream.thinking.test.tsx (3)

175-194: Mock aligns with updated useStateAndRef behavior.
Looks good and mirrors the hook’s contract cleanly.


399-455: Replacement semantics test is clear and focused.
Good coverage for overwrite vs. append behavior.


507-546: Pending thinking blocks coverage looks solid.
Nice check that pending items surface thinking before content arrives.

packages/cli/src/ui/components/messages/GeminiMessage.tsx (1)

47-50: Pending-item suppression avoids duplicate thinking output.
This guard matches the streaming UX expectations.

Also applies to: 59-60

packages/cli/src/ui/hooks/useGeminiStream.ts (2)

665-685: Streaming updates now preserve thinking blocks correctly.
Keeps pending history consistent while content is streaming.


1107-1107: Dependency list update is correct.
Including setPendingHistoryItem avoids stale closures.

packages/cli/src/nonInteractiveCli.ts (1)

203-219: Thought buffering/flush ordering looks solid.

Flushing buffered thoughts right before content/tool calls and at stream end keeps output order stable.

Also applies to: 260-262, 302-302

packages/cli/src/nonInteractiveCli.test.ts (1)

196-226: Good coverage for coalesced thought output.

The test validates that buffered thoughts are emitted once before content, matching the new buffering behavior.

packages/core/src/providers/openai/parseResponsesStream.ts (1)

63-194: Reasoning delta buffering and spacing helper look solid.

appendReasoningDelta and separate handling for reasoning vs summary deltas keep output readable and deterministic.
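A plausible sketch of a spacing helper like the one named above — the actual appendReasoningDelta may differ; this version just inserts a single space when neither side already supplies whitespace:

```typescript
// Join accumulated reasoning text with a new delta, adding one space only
// when needed, so streamed fragments read naturally without double spaces.
function appendReasoningDeltaSketch(buffer: string, delta: string): string {
  if (buffer.length === 0 || delta.length === 0) return buffer + delta;
  const needsSpace = !/\s$/.test(buffer) && !/^\s/.test(delta);
  return needsSpace ? `${buffer} ${delta}` : buffer + delta;
}
```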

packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts (1)

1-310: Reasoning stream test coverage is strong.

The suite exercises interleaving, empty deltas, spacing, dedupe, and usage ordering in a way that matches the new parser behavior.

packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (2)

763-771: Confirm Codex should drop reasoning input items.

In Codex mode, reasoning items are filtered out entirely before the synthetic config injection. If Codex accepts reasoning input items, this blocks reasoning.includeInContext from working. Please confirm with Codex /responses behavior (or add a targeted test) that this is intended.


997-1006: Passing includeThinkingInResponse into the parser is a good alignment.

This keeps streaming output consistent with the runtime setting.

packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningInclude.test.ts (1)

1-660: Great coverage for reasoning include/encrypted_content flows.

These tests exercise request shaping, SSE parsing, and context reinjection across the key reasoning settings.


- Normalize thought subject/description by trimming whitespace
- Skip creating thinking blocks when subject === description (prevents
  'Preparing: Preparing' style duplication)
- Skip empty/whitespace-only thoughts entirely
- Prevent thinking output in STREAM_JSON mode (would corrupt JSON stream)

Fixes #922
- OpenAIResponsesProvider: respect user-specified tool_choice (don't override)
- OpenAIResponsesProvider: fix reasoning item ID collision with counter
- OpenAIResponsesProvider: redact PII from debug logs (keys-only summary)
- parseResponsesStream: emit hidden reasoning when includeThinkingInResponse=false
  (preserves encrypted_content for round-trip while hiding UI display)
- IContent: treat encryptedContent as valid content for thinking blocks
- nonInteractiveCli.test: move skipped tests inside describe scope
- runtimeSettings.test: add text.verbosity assertion
- NOTICES.txt: add MIT license text for @hono/node-server

Issue: #922
PR: #1156

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts`:
- Around line 597-611: When adding reasoning items in OpenAIResponsesProvider
(inside the includeReasoningInContext block that iterates thinkingBlocks), guard
against undefined/empty thinkingBlock.thought before inserting a summary_text
entry: either skip adding a summary element if thinkingBlock.thought is falsy or
substitute a safe fallback (e.g., an empty string) so you never create { type:
'summary_text', text: undefined }; update the loop that builds the summary array
for each reasoning_{...} entry to check thinkingBlock.thought and only include
valid summary objects.
🧹 Nitpick comments (1)
packages/core/src/providers/openai/parseResponsesStream.ts (1)

134-162: Consider gating verbose debug logging behind a flag or reducing frequency.

The debug logging is helpful for development but may generate significant log volume in production. The per-event logging (lines 142-154) and reasoning-specific logging (lines 147-162) could impact performance with high-throughput streams.

♻️ Optional: Add a verbose flag or use trace level

Consider either:

  1. Adding a verbose option to ParseResponsesStreamOptions to control detailed logging
  2. Using a more granular log level (e.g., trace) for per-event logs while keeping reasoning logs at debug

This is not blocking since debug logs are typically disabled in production.

📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8c191ae and 268aa73.

📒 Files selected for processing (6)
  • packages/cli/src/nonInteractiveCli.test.ts
  • packages/cli/src/runtime/runtimeSettings.reasoningSummary.test.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
  • packages/core/src/services/history/IContent.ts
  • packages/vscode-ide-companion/NOTICES.txt
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/cli/src/nonInteractiveCli.test.ts
  • packages/cli/src/runtime/runtimeSettings.reasoningSummary.test.ts
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2026-01-13T19:28:00.789Z
Learning: In the llxprt-code codebase (`packages/core/src/providers/anthropic/AnthropicProvider.ts`), Anthropic's API returns `contentBlock.input` as an already-parsed JavaScript object, not a JSON string. The code was incorrectly calling `JSON.stringify(contentBlock.input)` before passing it to `processToolParameters()`, which was designed for OpenAI-style string parameters. This causes arrays and other complex types to be corrupted into strings (e.g., `paths` array becomes a string `"[\"**/*.toml\"]"` instead of actual array). The fix is to use `contentBlock.input` directly without stringifying for Anthropic provider.
📚 Learning: 2026-01-13T19:28:00.789Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2026-01-13T19:28:00.789Z
Learning: In the llxprt-code codebase (`packages/core/src/providers/anthropic/AnthropicProvider.ts`), Anthropic's API returns `contentBlock.input` as an already-parsed JavaScript object, not a JSON string. The code was incorrectly calling `JSON.stringify(contentBlock.input)` before passing it to `processToolParameters()`, which was designed for OpenAI-style string parameters. This causes arrays and other complex types to be corrupted into strings (e.g., `paths` array becomes a string `"[\"**/*.toml\"]"` instead of actual array). The fix is to use `contentBlock.input` directly without stringifying for Anthropic provider.

Applied to files:

  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
  • packages/core/src/services/history/IContent.ts
📚 Learning: 2025-11-16T22:51:26.374Z
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.

Applied to files:

  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
📚 Learning: 2025-12-14T16:16:39.939Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2025-12-14T16:16:39.939Z
Learning: MiniMax M2 model (hf:MiniMax/MiniMax-M2) requires reasoning.stripFromContext set to "none" to preserve interleaved thinking tokens in conversation history, as dropping thinking content harms agent reliability.

Applied to files:

  • packages/core/src/providers/openai/parseResponsesStream.ts
🧬 Code graph analysis (2)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (2)
packages/core/src/services/history/IContent.ts (1)
  • ThinkingBlock (177-194)
packages/core/src/providers/openai/parseResponsesStream.ts (2)
  • ParseResponsesStreamOptions (80-87)
  • parseResponsesStream (89-457)
packages/core/src/providers/openai/parseResponsesStream.ts (1)
packages/core/src/services/history/IContent.ts (2)
  • IContent (21-40)
  • ContentBlock (93-99)
🔇 Additional comments (25)
packages/vscode-ide-companion/NOTICES.txt (4)

10-10: LGTM: Dependency version update.

The @modelcontextprotocol/sdk version update from 1.24.3 to 1.25.2 is properly documented with complete MIT license text.


36-59: LGTM: License text issue resolved.

The complete MIT license text for @hono/node-server@1.19.9 is now properly included, addressing the previous review concern.


1037-1037: LGTM: Dependency patch update.

The qs library patch version update from 6.14.0 to 6.14.1 is properly documented with complete BSD 3-Clause license text.


2290-2351: LGTM: Comprehensive license attribution.

The [email protected] entry includes thorough BSD 2-Clause license text with detailed copyright attributions covering both the library source code and JSON Schema specification documentation across multiple draft versions.

packages/core/src/services/history/IContent.ts (2)

191-193: LGTM! New encryptedContent field for round-trip reasoning support.

The addition of the encryptedContent field to ThinkingBlock enables preservation of OpenAI's encrypted reasoning content for stateless round-trip contexts, aligning with the PR objectives.


235-241: LGTM! Validation logic correctly treats encrypted-only blocks as valid.

The updated hasContent check properly considers encryptedContent as valid content, ensuring encrypted-only thinking blocks are preserved for round-trip reasoning even when thought is empty.
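The updated validation reduces to a one-line predicate, sketched here with a trimmed-down interface (field names follow the ThinkingBlock shape in IContent.ts; the helper name is illustrative):

```typescript
// A thinking block counts as having content when either its thought text
// (ignoring whitespace) or its encryptedContent is non-empty — so
// encrypted-only blocks survive for round-trip reasoning.
interface ThinkingBlockLike {
  thought?: string;
  encryptedContent?: string;
}

function hasContentSketch(block: ThinkingBlockLike): boolean {
  return Boolean(block.thought?.trim()) || Boolean(block.encryptedContent);
}
```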

packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (11)

32-35: LGTM! Added necessary imports for reasoning support.

The ThinkingBlock import enables proper typing for reasoning block handling in this provider.


362-367: LGTM! Extended input union to support reasoning items.

The reasoning type addition to the input union matches the OpenAI Responses API schema for reasoning items with id, summary, and encrypted_content fields.


551-557: LGTM! ResponsesInputItem union includes reasoning variant.

This correctly models the Responses API input types including the new reasoning variant needed for round-trip context.


568-573: LGTM! Reasoning context configuration with per-request ID counter.

The includeReasoningInContext setting and reasoningIdCounter provide proper control over reasoning inclusion and unique ID generation within a request.


766-774: LGTM! Defensive filtering of reasoning items for Codex synthetic injection.

Correctly filters out reasoning items before injecting synthetic config file read, preventing potential issues with the Codex path.


796-801: LGTM! Correctly respects user-specified tool_choice.

The conditional check now only defaults to 'auto' when tool_choice is not already set, preserving user-specified values like 'required' or specific function names.


804-830: LGTM! Reasoning configuration for request building.

The logic correctly derives shouldRequestReasoning and adds include parameter for encrypted content when reasoning is enabled. The debug logging helps with troubleshooting.


832-846: LGTM! Reasoning summary configuration.

Correctly adds reasoning.summary to the request when set and not 'none', matching the codex-rs implementation pattern.


854-871: LGTM! Text verbosity support for Responses API.

The text.verbosity field is correctly validated and added to the request when set to a valid value (low, medium, high).
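
The request-building rules from the three comments above (encrypted-content include, `reasoning.summary` omission when `'none'`, and `text.verbosity` validation) can be sketched together. The gating logic and function name here are assumptions drawn from the review notes; only the field names (`include`, `reasoning`, `text.verbosity`) follow the OpenAI Responses API.

```typescript
// Illustrative sketch, not the provider's actual code.
interface ReasoningSettings {
  enabled?: boolean;
  summary?: 'auto' | 'concise' | 'detailed' | 'none';
}

function buildRequestExtras(
  reasoning: ReasoningSettings,
  verbosity?: string,
): Record<string, unknown> {
  const body: Record<string, unknown> = {};
  if (reasoning.enabled) {
    // Request encrypted reasoning content for stateless round-trips.
    body['include'] = ['reasoning.encrypted_content'];
    // 'none' means: leave reasoning.summary out of the request entirely.
    if (reasoning.summary && reasoning.summary !== 'none') {
      body['reasoning'] = { summary: reasoning.summary };
    }
  }
  // Only send text.verbosity when it is one of the valid values.
  if (verbosity && ['low', 'medium', 'high'].includes(verbosity)) {
    body['text'] = { verbosity };
  }
  return body;
}
```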


924-927: LGTM! Debug logging now redacts sensitive data.

The debug log now only outputs request keys rather than the full body, addressing the previous review concern about PII/secret exposure.


1006-1014: LGTM! Stream options passed to parseResponsesStream.

The includeThinkingInResponse option is correctly passed to the stream parser, ensuring reasoning blocks respect the user's visibility preference.

packages/core/src/providers/openai/parseResponsesStream.ts (8)

8-15: LGTM! Updated imports for reasoning support.

The addition of ContentBlock import and DebugLogger enables proper typing and debugging for reasoning block handling.


17-53: LGTM! Extended ResponsesEvent type for reasoning fields.

The additions of text, content_index, summary_index, and the extended item shape (with summary, content, encrypted_content arrays) correctly model the OpenAI Responses API reasoning events.


63-75: LGTM! Helper for concatenating reasoning deltas with smart spacing.

The appendReasoningDelta function handles edge cases well—returning early for empty values and inserting a space only when needed between word characters and parentheses.
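
The spacing rule described above can be sketched roughly like this. This is a hypothetical reconstruction from the review comment, not the actual helper in `parseResponsesStream.ts`; the exact character classes it tests are an assumption.

```typescript
// Sketch: insert a single space only when the buffer ends and the
// delta begins with word-like characters (or adjacent parentheses),
// so tokens from separate deltas are not fused together.
function appendReasoningDelta(buffer: string, delta: string): string {
  if (!delta) return buffer;
  if (!buffer) return delta;
  const needsSpace = /[\w)]$/.test(buffer) && /^[\w(]/.test(delta);
  return needsSpace ? `${buffer} ${delta}` : buffer + delta;
}
```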


77-107: LGTM! Stream options and deduplication tracking.

The ParseResponsesStreamOptions interface with includeThinkingInResponse and the emittedThoughts Set provide proper configuration and deduplication for reasoning blocks.


176-194: LGTM! Delta accumulation for reasoning text and summary.

The handlers correctly accumulate deltas using appendReasoningDelta and keep reasoningText and reasoningSummaryText as separate buffers.


266-333: LGTM! Comprehensive reasoning handling in output_item.done.

The handler correctly:

  • Extracts thought text from summary and content arrays with fallback to accumulated buffers
  • Deduplicates against emittedThoughts
  • Includes encryptedContent when present
  • Sets isHidden based on includeThinkingInResponse
  • Clears buffers after processing

379-421: LGTM! Fallback emission for remaining reasoning on stream completion.

The response.completed/response.done handler correctly emits any accumulated reasoning that wasn't emitted via other event paths, with proper deduplication and isHidden flag.


196-242: Reasoning events do not include encrypted_content in .done events.

The OpenAI Responses API does not include encrypted_content in response.reasoning_text.done or response.reasoning_summary_text.done events. The encrypted_content field is only available on the reasoning item object itself (retrieved separately via include=["reasoning.encrypted_content"]), not on the streaming event payloads. The current code is correct and does not need modification.

Likely an incorrect or invalid review comment.


When building reasoning items for the Responses API context, ensure
thinkingBlock.thought has a fallback to empty string to avoid creating
invalid { type: 'summary_text', text: undefined } objects.

Addresses CodeRabbit review #3720508548
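
The fallback described in this commit message amounts to a small guard when mapping a thinking block to a Responses API summary entry. The function name here is hypothetical; only the `{ type: 'summary_text', text: ... }` shape comes from the commit message.

```typescript
// Sketch of the guard: default a missing thought to '' so the API
// never receives { type: 'summary_text', text: undefined }.
function toSummaryText(thought?: string): { type: 'summary_text'; text: string } {
  return { type: 'summary_text', text: thought ?? '' };
}
```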
Integrate main branch features while preserving issue #922 fixes:
- Add settings registry with auto-validation for enum/boolean/number types
- Fix reasoning.summary='none' handling to exclude from API requests
- Add reasoning.effort to request body when reasoning is enabled
- Filter 'reasoning' object from modelParams to prevent override
- Update pending history item with thinking blocks during streaming
- Add setPendingHistoryItem to useCallback dependency array

Key changes for #922:
- reasoning.summary and text.verbosity settings in registry
- codex.config includes reasoning.summary=auto and reasoning.effort=medium
- OpenAIResponsesProvider reads reasoning settings from model-behavior
- ThinkingBlocks visible in pendingHistoryItems during streaming
…lation

The thinkingBlocksRef was accumulating across multiple tool calls within
the same turn. When each chunk was committed to history via
flushPendingHistoryItem, the thinking blocks from previous chunks would
be included again, causing the pyramid effect where each message showed
all previous thoughts plus the current one.

Fix: Clear thinkingBlocksRef.current = [] immediately after committing
an item to history. This ensures each history item only contains the
thinking blocks that occurred since the last commit.

Fixes #922
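
The fix described in this commit message can be sketched as follows. The names (`thinkingBlocksRef`, a history array) are stand-ins for the real hook state, shown only to illustrate why clearing the ref after each commit stops the pyramid effect.

```typescript
// Sketch of the commit-and-clear pattern described above.
type Block = { thought: string };

function commitToHistory(
  history: Array<{ blocks: Block[] }>,
  thinkingBlocksRef: { current: Block[] },
): void {
  // Snapshot the blocks accumulated since the last commit...
  history.push({ blocks: [...thinkingBlocksRef.current] });
  // ...then clear, so the next commit doesn't re-include them.
  thinkingBlocksRef.current = [];
}
```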

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/core/src/providers/openai/parseResponsesStream.ts (1)

98-333: Encrypted content can be lost when response.reasoning_text.done precedes response.output_item.done with the same reasoning text.

Per OpenAI's Responses API, both event types can carry the same reasoning text content, but encrypted_content appears only on the reasoning output item (in response.output_item.done). Since events can arrive out of order, if reasoning_text.done emits first and adds the text to emittedThoughts, the subsequent output_item.done is skipped entirely—dropping the encrypted_content needed for stateless workflows (ZDR/store: false).

Use a Map to track whether encrypted content has been captured per thought, allowing a hidden re-emit on first arrival of encrypted_content:

Proposed fix
-  const emittedThoughts = new Set<string>();
+  const emittedThoughts = new Map<string, { hasEncrypted: boolean }>();

In response.reasoning_text.done and response.reasoning_summary_text.done handlers, replace emittedThoughts.add(...) with emittedThoughts.set(thoughtContent, { hasEncrypted: false }).

In response.output_item.done handler for reasoning items, check:

const prior = emittedThoughts.get(finalThought);
const hasEncryptedContent = Boolean(event.item?.encrypted_content);
const shouldEmit = finalThought && (!prior || (hasEncryptedContent && !prior.hasEncrypted));
if (shouldEmit) {
  const shouldHide = !includeThinkingInResponse || Boolean(prior);
  // ... emit with isHidden: shouldHide ...
  emittedThoughts.set(finalThought, {
    hasEncrypted: Boolean(prior?.hasEncrypted) || hasEncryptedContent,
  });
}

Add a test covering the scenario where reasoning_text.done precedes output_item.done with encrypted_content.

🤖 Fix all issues with AI agents
In `@packages/cli/src/nonInteractiveCli.test.ts`:
- Around line 812-860: The test "should NOT emit pyramid-style repeated prefixes
in non-interactive CLI" is missing the ephemeral setting mock; before calling
runNonInteractive, add the same ephemeral stub used in the other tests by
configuring mockSettings (e.g., set mockSettings.ephemeral or stub
mockSettings.get('ephemeral')/mockSettings.getSetting to return the same value
used in the active test) so runNonInteractive and its use of mockSettings behave
consistently with the other tests that already include the ephemeral mock.
- Around line 762-810: The skipped test "should accumulate multiple Thought
events and flush once on content boundary" is missing a mock for
getEphemeralSetting to enable ephemeral reasoning output and its assertions
don't match how thought subjects are concatenated; before calling
runNonInteractive mock getEphemeralSetting to return true for the reasoning
setting (same setup used in the active test) so thinking output is produced, and
update the assertions against processStdoutSpy to match the actual thoughtText
construction used by the code (e.g., check for the concatenated subject format
produced by the Thought events from GeminiEventType.Thought rather than "First
thought"/"Second thought").

In `@packages/cli/src/settings/ephemeralSettings.ts`:
- Around line 70-75: isValidEphemeralSetting currently checks the raw key
against validEphemeralKeys before parsing, so alias keys (e.g., "max-tokens")
are rejected; modify isValidEphemeralSetting to first resolve aliases by calling
the same alias-resolution logic used in parseEphemeralSettingValue (i.e., derive
the canonical key from the input key) and then check that canonical key against
validEphemeralKeys and call parseEphemeralSettingValue using the canonical key
so validation behavior matches parseEphemeralSettingValue; update references to
validEphemeralKeys, parseEphemeralSettingValue, and isValidEphemeralSetting
accordingly.

In `@packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts`:
- Around line 560-563: The code reads reasoning.includeInContext /
reasoning.includeInResponse / reasoning.enabled only from SettingsService (via
options.settings), ignoring per-invocation ephemerals or modelBehavior; update
the logic in OpenAIResponsesProvider (around where includeReasoningInContext is
computed) to first check any invocation-level overrides (e.g.,
options.ephemeral, options.modelBehavior or options.invocationModelBehavior) and
the modelBehavior fetchers for reasoning flags, and only if those are undefined
fall back to options.settings?.get('reasoning...'); apply the same precedence
for all three flags (includeInContext, includeInResponse, enabled) and the other
occurrence block later (the region referenced at 796-810) so ephemeral /
per-call overrides take precedence over SettingsService.
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2920a5e and e0b23fc.

📒 Files selected for processing (22)
  • packages/cli/src/nonInteractiveCli.test.ts
  • packages/cli/src/nonInteractiveCli.ts
  • packages/cli/src/providers/aliases/codex.config
  • packages/cli/src/providers/providerAliases.codex.reasoningSummary.test.ts
  • packages/cli/src/runtime/runtimeSettings.reasoningSummary.test.ts
  • packages/cli/src/runtime/runtimeSettings.ts
  • packages/cli/src/settings/ephemeralSettings.reasoningSummary.test.ts
  • packages/cli/src/settings/ephemeralSettings.textVerbosity.test.ts
  • packages/cli/src/settings/ephemeralSettings.ts
  • packages/cli/src/ui/commands/setCommand.ts
  • packages/cli/src/ui/components/messages/GeminiMessage.tsx
  • packages/cli/src/ui/hooks/useGeminiStream.thinking.test.tsx
  • packages/cli/src/ui/hooks/useGeminiStream.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningInclude.test.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.textVerbosity.test.ts
  • packages/core/src/providers/openai/openaiRequestParams.ts
  • packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
  • packages/core/src/services/history/IContent.ts
  • packages/core/src/settings/settingsRegistry.ts
🚧 Files skipped from review as they are similar to previous changes (5)
  • packages/cli/src/providers/aliases/codex.config
  • packages/cli/src/ui/components/messages/GeminiMessage.tsx
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.textVerbosity.test.ts
  • packages/cli/src/runtime/runtimeSettings.reasoningSummary.test.ts
  • packages/cli/src/providers/providerAliases.codex.reasoningSummary.test.ts
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.
📚 Learning: 2026-01-13T19:28:00.789Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2026-01-13T19:28:00.789Z
Learning: In the llxprt-code codebase (`packages/core/src/providers/anthropic/AnthropicProvider.ts`), Anthropic's API returns `contentBlock.input` as an already-parsed JavaScript object, not a JSON string. The code was incorrectly calling `JSON.stringify(contentBlock.input)` before passing it to `processToolParameters()`, which was designed for OpenAI-style string parameters. This causes arrays and other complex types to be corrupted into strings (e.g., `paths` array becomes a string `"[\"**/*.toml\"]"` instead of actual array). The fix is to use `contentBlock.input` directly without stringifying for Anthropic provider.

Applied to files:

  • packages/core/src/services/history/IContent.ts
  • packages/cli/src/nonInteractiveCli.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/cli/src/ui/hooks/useGeminiStream.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
📚 Learning: 2025-11-16T22:51:26.374Z
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.

Applied to files:

  • packages/cli/src/nonInteractiveCli.test.ts
  • packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts
  • packages/cli/src/nonInteractiveCli.ts
  • packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/cli/src/ui/hooks/useGeminiStream.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
📚 Learning: 2026-01-03T17:53:10.145Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2026-01-03T17:53:10.145Z
Learning: In Codex OAuth mode, the synthetic read_file tool call should always target AGENTS.md (matching CODEX_SYSTEM_PROMPT expectations), but include the actual userMemory content from whatever files are configured (LLXPRT.md, AGENTS.md, or both). The synthetic output should NOT disclose which files were actually loaded - the goal is to convince GPT it already read AGENTS.md to prevent redundant reads, regardless of the actual file configuration.

Applied to files:

  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
📚 Learning: 2025-12-14T16:16:39.939Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2025-12-14T16:16:39.939Z
Learning: MiniMax M2 model (hf:MiniMax/MiniMax-M2) requires reasoning.stripFromContext set to "none" to preserve interleaved thinking tokens in conversation history, as dropping thinking content harms agent reliability.

Applied to files:

  • packages/core/src/providers/openai/parseResponsesStream.ts
🧬 Code graph analysis (8)
packages/cli/src/settings/ephemeralSettings.reasoningSummary.test.ts (1)
packages/cli/src/settings/ephemeralSettings.ts (2)
  • ephemeralSettingHelp (15-15)
  • isValidEphemeralSetting (70-76)
packages/cli/src/ui/hooks/useGeminiStream.thinking.test.tsx (1)
packages/cli/src/test-utils/render.tsx (1)
  • waitFor (245-263)
packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts (3)
packages/core/src/integration-tests/geminiChat-isolation.integration.test.ts (1)
  • stream (101-111)
packages/core/src/services/history/IContent.ts (1)
  • IContent (21-40)
packages/core/src/providers/openai/parseResponsesStream.ts (1)
  • parseResponsesStream (89-457)
packages/cli/src/settings/ephemeralSettings.textVerbosity.test.ts (1)
packages/cli/src/settings/ephemeralSettings.ts (2)
  • parseEphemeralSettingValue (35-61)
  • ephemeralSettingHelp (15-15)
packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts (1)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)
  • OpenAIResponsesProvider (70-1052)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (4)
packages/core/src/providers/BaseProvider.ts (1)
  • NormalizedGenerateChatOptions (63-80)
packages/core/src/providers/openai-responses/buildResponsesInputFromContent.ts (1)
  • ResponsesInputItem (27-35)
packages/core/src/services/history/IContent.ts (1)
  • ThinkingBlock (177-194)
packages/core/src/providers/openai/parseResponsesStream.ts (2)
  • ParseResponsesStreamOptions (80-87)
  • parseResponsesStream (89-457)
packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningInclude.test.ts (2)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)
  • OpenAIResponsesProvider (70-1052)
packages/core/src/test-utils/providerCallOptions.ts (1)
  • createProviderCallOptions (207-257)
packages/core/src/providers/openai/parseResponsesStream.ts (1)
packages/core/src/services/history/IContent.ts (2)
  • IContent (21-40)
  • ContentBlock (93-99)
⏰ Context from checks skipped due to timeout of 270000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Test (windows-latest, 24.x)
  • GitHub Check: Test (macos-latest, 24.x)
  • GitHub Check: Test (ubuntu-latest, 24.x)
  • GitHub Check: E2E Test (macOS)
  • GitHub Check: E2E Test (Linux) - sandbox:none
  • GitHub Check: E2E Test (Linux) - sandbox:docker
  • GitHub Check: Slow E2E - Win
🔇 Additional comments (27)
packages/cli/src/nonInteractiveCli.ts (4)

280-301: LGTM! Thought buffering mechanism correctly implemented.

The STREAM_JSON corruption issue from the previous review is properly addressed by including !streamJsonOutput in the includeThinking guard (line 285). The buffer is correctly scoped per-turn, and flushThoughtBuffer properly handles empty/whitespace-only content.


315-340: LGTM! Thought buffering prevents fragmented output.

The buffering approach correctly accumulates multiple thoughts with space separators before flushing as a single <think> block. Emoji filtering is properly applied before buffering, and blocked thoughts are skipped entirely.
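
The buffer-and-flush pattern described above can be sketched like this. All names are illustrative, not the real CLI implementation; the `<think>` wrapper and space-joining behavior come from the review comments.

```typescript
// Sketch: accumulate thoughts and emit them as one <think> block
// at flush points (before content / tool calls / end of stream).
class ThoughtBuffer {
  private parts: string[] = [];

  add(thought: string): void {
    // Skip empty/whitespace-only thoughts entirely.
    if (thought.trim()) this.parts.push(thought.trim());
  }

  flush(write: (s: string) => void): void {
    if (this.parts.length === 0) return;
    write(`<think>${this.parts.join(' ')}</think>\n`);
    this.parts = [];
  }
}
```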


342-343: LGTM! Strategic flush points ensure correct output ordering.

Flushing the thought buffer before Content and ToolCallRequest events ensures reasoning appears before response text and tool invocations, matching the expected UX where thinking precedes action.

Also applies to: 385-386


427-428: LGTM! Final flush handles edge cases.

The final flushThoughtBuffer() after the event loop ensures any remaining buffered thoughts are emitted even if the stream ends without a Content or ToolCallRequest event (e.g., reasoning-only streams).

packages/cli/src/ui/hooks/useGeminiStream.thinking.test.tsx (3)

176-195: Well-structured generic mock for useStateAndRef.

The mock correctly implements the state/ref synchronization pattern with proper typing. The updater function handling (detecting function vs value) mirrors React's setState behavior appropriately.


399-455: Good test coverage for thought replacement semantics.

The controlled promise resolver pattern allows precise verification that thought state is overwritten (not appended) across successive events. The assertions correctly validate the expected replacement behavior.


507-546: Test correctly validates pending thinking block exposure during streaming.

This test aligns well with the PR objective to ensure thinking blocks are visible in pendingHistoryItems before content arrives, enabling real-time UI display of reasoning content.

packages/cli/src/ui/hooks/useGeminiStream.ts (3)

274-277: Correct thinking block cleanup after history commit.

Clearing thinkingBlocksRef.current after committing ensures thinking blocks don't accumulate across multiple tool calls within the same turn. This is the appropriate location since it occurs after addItem() captures the blocks.


969-977: Thinking blocks now surface in pending history during streaming.

This change enables real-time UI display of reasoning content by updating pendingHistoryItem as soon as thinking blocks arrive. The implementation correctly creates a new array reference for React state updates.

One consideration: when item is null/undefined, this creates a pending item with empty text but populated thinkingBlocks. This appears intentional based on the test "should expose pending thinking blocks before content arrives", but verify this doesn't cause visual artifacts in the UI when thinking blocks appear before any text content.


1078-1081: Correct dependency array update.

Adding setPendingHistoryItem to the dependency array is required since it's now used within processGeminiStreamEvents to update pending history with thinking blocks.

packages/cli/src/nonInteractiveCli.test.ts (1)

203-233: LGTM!

The test correctly sets up the ephemeral setting mock to enable thinking output, creates a stream with multiple thought events followed by content, and verifies that thoughts are coalesced into a single <think> block before the content is emitted.

packages/cli/src/settings/ephemeralSettings.reasoningSummary.test.ts (1)

1-70: Nice coverage for reasoning.summary validation and help text.

Tests are clear and exercise valid values, invalid values, and type mismatches.

packages/cli/src/settings/ephemeralSettings.textVerbosity.test.ts (1)

1-55: Good test coverage for text.verbosity parsing and help metadata.

packages/core/src/providers/openai/openaiRequestParams.ts (1)

55-86: Sanitization updates look good.

packages/cli/src/runtime/runtimeSettings.ts (1)

897-899: Good centralization of profile-persistable ephemeral keys.

packages/core/src/services/history/IContent.ts (1)

191-249: Thinking block + validation updates look solid (encrypted content included).

packages/cli/src/ui/commands/setCommand.ts (1)

316-340: Nice addition of reasoning.summary and text.verbosity to /set completions.
The new options and hints align well with the existing direct-setting schema and should improve discoverability.

packages/core/src/settings/settingsRegistry.ts (2)

228-245: Registry additions look good.
New reasoning.summary and text.verbosity entries are well-scoped and consistent with existing model-behavior settings.


1023-1058: Auto-validation for enum/boolean/number is a solid improvement.
Reduces custom validator boilerplate while keeping type constraints explicit.

packages/core/src/providers/openai/parseResponsesStream.reasoning.test.ts (1)

1-311: Great coverage for reasoning stream edge cases and ordering.
The tests exercise deltas, summaries, whitespace, and usage metadata in a clear, robust way.

packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningInclude.test.ts (1)

1-660: Test suite is thorough and well-targeted.
It validates include wiring, encrypted_content handling, and context round-tripping effectively.

packages/core/src/providers/openai-responses/__tests__/OpenAIResponsesProvider.reasoningSummary.test.ts (1)

1-367: Reasoning.summary scenarios are well covered.
Good spread of values and omission cases with clear request assertions.

packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (1)

753-761: Add explanatory comment or confirm Codex /responses API reasoning support. The code intentionally filters reasoning items only in Codex mode (lines 753–761), but provides no comment explaining whether this is due to Codex API limitations or architectural preference. If Codex /responses does support reasoning items with encrypted_content (like the standard OpenAI Responses API), this filtering may unnecessarily break context preservation on subsequent requests when reasoning.includeInContext is enabled. Either document the limitation or restore reasoning items to the Codex input if the API supports them.

packages/core/src/providers/openai/parseResponsesStream.ts (4)

8-35: Strong event typing for reasoning/summary payloads.
Clearer typing here will make the new reasoning branches safer and easier to evolve.


63-93: Helper + options wiring looks solid.
Nice encapsulation of reasoning delta concatenation and an explicit options surface.


134-163: Debug logging is useful for SSE bring‑up.
The extra reasoning logs should help validate event ordering during integration.


379-422: Fallback emission and usage metadata handling look good.
This should prevent reasoning loss on response termination while keeping usage intact.


Comment on lines +762 to +810
it.skip('should accumulate multiple Thought events and flush once on content boundary', async () => {
const thoughtEvent1: ServerGeminiStreamEvent = {
type: GeminiEventType.Thought,
value: {
subject: 'First',
description: 'thought',
},
};
const thoughtEvent2: ServerGeminiStreamEvent = {
type: GeminiEventType.Thought,
value: {
subject: 'Second',
description: 'thought',
},
};
const contentEvent: ServerGeminiStreamEvent = {
type: GeminiEventType.Content,
value: 'Response text',
};
const finishedEvent: ServerGeminiStreamEvent = {
type: GeminiEventType.Finished,
value: { reason: undefined, usageMetadata: { totalTokenCount: 10 } },
};

mockGeminiClient.sendMessageStream.mockReturnValueOnce(
createStreamFromEvents([
thoughtEvent1,
thoughtEvent2,
contentEvent,
finishedEvent,
]),
);

await runNonInteractive({
config: mockConfig,
settings: mockSettings,
input: 'test query',
prompt_id: 'test-prompt-id',
});

const thinkingOutputs = processStdoutSpy.mock.calls.filter(
([output]: [string]) => output.includes('<think>'),
);

expect(thinkingOutputs).toHaveLength(1);
const thinkingText = thinkingOutputs[0][0];
expect(thinkingText).toContain('First thought');
expect(thinkingText).toContain('Second thought');
});

⚠️ Potential issue | 🟡 Minor

Skipped tests are missing ephemeral setting mock for thinking output.

When these tests are unskipped, they will likely fail because getEphemeralSetting is not mocked to return true for the reasoning setting. The active test at line 204-206 demonstrates the required setup. Additionally, verify that the expected assertion format matches the implementation—the active test expects subjects concatenated ("First Second"), but this test expects "First thought" and "Second thought" which may not match how thoughtText is constructed.

🛠️ Suggested fix when unskipping
-  it.skip('should accumulate multiple Thought events and flush once on content boundary', async () => {
+  it('should accumulate multiple Thought events and flush once on content boundary', async () => {
+    mockConfig.getEphemeralSetting = vi
+      .fn<(key: string) => boolean | undefined>()
+      .mockReturnValue(true);
+
     const thoughtEvent1: ServerGeminiStreamEvent = {

Also verify the expected output format matches the implementation (e.g., "First: thought" vs "First thought").

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
it.skip('should accumulate multiple Thought events and flush once on content boundary', async () => {
const thoughtEvent1: ServerGeminiStreamEvent = {
type: GeminiEventType.Thought,
value: {
subject: 'First',
description: 'thought',
},
};
const thoughtEvent2: ServerGeminiStreamEvent = {
type: GeminiEventType.Thought,
value: {
subject: 'Second',
description: 'thought',
},
};
const contentEvent: ServerGeminiStreamEvent = {
type: GeminiEventType.Content,
value: 'Response text',
};
const finishedEvent: ServerGeminiStreamEvent = {
type: GeminiEventType.Finished,
value: { reason: undefined, usageMetadata: { totalTokenCount: 10 } },
};
mockGeminiClient.sendMessageStream.mockReturnValueOnce(
createStreamFromEvents([
thoughtEvent1,
thoughtEvent2,
contentEvent,
finishedEvent,
]),
);
await runNonInteractive({
config: mockConfig,
settings: mockSettings,
input: 'test query',
prompt_id: 'test-prompt-id',
});
const thinkingOutputs = processStdoutSpy.mock.calls.filter(
([output]: [string]) => output.includes('<think>'),
);
expect(thinkingOutputs).toHaveLength(1);
const thinkingText = thinkingOutputs[0][0];
expect(thinkingText).toContain('First thought');
expect(thinkingText).toContain('Second thought');
});
it('should accumulate multiple Thought events and flush once on content boundary', async () => {
mockConfig.getEphemeralSetting = vi
.fn<(key: string) => boolean | undefined>()
.mockReturnValue(true);
const thoughtEvent1: ServerGeminiStreamEvent = {
type: GeminiEventType.Thought,
value: {
subject: 'First',
description: 'thought',
},
};
const thoughtEvent2: ServerGeminiStreamEvent = {
type: GeminiEventType.Thought,
value: {
subject: 'Second',
description: 'thought',
},
};
const contentEvent: ServerGeminiStreamEvent = {
type: GeminiEventType.Content,
value: 'Response text',
};
const finishedEvent: ServerGeminiStreamEvent = {
type: GeminiEventType.Finished,
value: { reason: undefined, usageMetadata: { totalTokenCount: 10 } },
};
mockGeminiClient.sendMessageStream.mockReturnValueOnce(
createStreamFromEvents([
thoughtEvent1,
thoughtEvent2,
contentEvent,
finishedEvent,
]),
);
await runNonInteractive({
config: mockConfig,
settings: mockSettings,
input: 'test query',
prompt_id: 'test-prompt-id',
});
const thinkingOutputs = processStdoutSpy.mock.calls.filter(
([output]: [string]) => output.includes('<think>'),
);
expect(thinkingOutputs).toHaveLength(1);
const thinkingText = thinkingOutputs[0][0];
expect(thinkingText).toContain('First thought');
expect(thinkingText).toContain('Second thought');
});
🤖 Prompt for AI Agents
In `@packages/cli/src/nonInteractiveCli.test.ts` around lines 762-810: the
skipped test "should accumulate multiple Thought events and flush once on
content boundary" is missing a mock for getEphemeralSetting to enable ephemeral
reasoning output, and its assertions don't match how thought subjects are
concatenated. Before calling runNonInteractive, mock getEphemeralSetting to
return true for the reasoning setting (the same setup used in the active test) so
thinking output is produced, and update the assertions against processStdoutSpy
to match the actual thoughtText construction used by the code (e.g., check for
the concatenated subject format produced by the Thought events from
GeminiEventType.Thought rather than "First thought"/"Second thought").

Comment on lines +812 to +860
it.skip('should NOT emit pyramid-style repeated prefixes in non-interactive CLI', async () => {
const thoughtEvent1: ServerGeminiStreamEvent = {
type: GeminiEventType.Thought,
value: {
subject: 'Analyzing',
description: '',
},
};
const thoughtEvent2: ServerGeminiStreamEvent = {
type: GeminiEventType.Thought,
value: {
subject: 'request',
description: '',
},
};
const contentEvent: ServerGeminiStreamEvent = {
type: GeminiEventType.Content,
value: 'Response',
};
const finishedEvent: ServerGeminiStreamEvent = {
type: GeminiEventType.Finished,
value: { reason: undefined, usageMetadata: { totalTokenCount: 10 } },
};

mockGeminiClient.sendMessageStream.mockReturnValueOnce(
createStreamFromEvents([
thoughtEvent1,
thoughtEvent2,
contentEvent,
finishedEvent,
]),
);

await runNonInteractive({
config: mockConfig,
settings: mockSettings,
input: 'test query',
prompt_id: 'test-prompt-id',
});

const thinkingOutputs = processStdoutSpy.mock.calls.filter(
([output]: [string]) => output.includes('<think>'),
);

expect(thinkingOutputs).toHaveLength(1);
const thinkingText = thinkingOutputs[0][0];
const thoughtCount = (thinkingText.match(/Analyzing/g) || []).length;
expect(thoughtCount).toBe(1);
});
Contributor

⚠️ Potential issue | 🟡 Minor

Same issue: missing ephemeral setting mock.

This skipped test also needs the ephemeral setting mock when unskipped, consistent with the first skipped test and the active test at line 203.

🤖 Prompt for AI Agents
In `@packages/cli/src/nonInteractiveCli.test.ts` around lines 812-860: the test
"should NOT emit pyramid-style repeated prefixes in non-interactive CLI" is
missing the ephemeral setting mock. Before calling runNonInteractive, add the
same ephemeral stub used in the other tests by configuring mockSettings (e.g.,
set mockSettings.ephemeral or stub
mockSettings.get('ephemeral')/mockSettings.getSetting to return the same value
used in the active test) so runNonInteractive and its use of mockSettings behave
consistently with the tests that already include the ephemeral mock.

The test was missing the getModelBehavior function in the invocation
mock, causing a TypeError when the provider tried to call it to get
reasoning settings.
1. parseResponsesStream.ts: Use Map instead of Set for emittedThoughts
   to track whether encrypted_content has been captured. This handles
   the case where reasoning_text.done arrives before output_item.done
   with encrypted_content - we can re-emit a hidden block with the
   encrypted content for stateless/ZDR workflows.

2. OpenAIResponsesProvider.ts: Add invocation ephemerals precedence
   for reasoning.includeInContext, reasoning.includeInResponse, and
   reasoning.enabled settings. This ensures /set (ephemeral) and
   per-call overrides take effect.

3. ephemeralSettings.ts: Resolve aliases before validating in
   isValidEphemeralSetting so alias keys like 'max-tokens' work
   consistently with parseEphemeralSettingValue.
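
Item 1 above can be sketched roughly as follows. This is a minimal illustration of the Map-based approach, not the actual parser code; the names (`EmitState`, `onReasoningTextDone`, `onOutputItemDone`) are hypothetical stand-ins for the real handlers:

```typescript
// Track per-item reasoning emission state with a Map instead of a Set,
// so we remember whether encrypted_content has been captured yet.
interface EmitState {
  encryptedCaptured: boolean;
}

const emittedThoughts = new Map<string, EmitState>();

function onReasoningTextDone(itemId: string): void {
  // First sight of this item: mark it emitted, encrypted content not yet seen.
  if (!emittedThoughts.has(itemId)) {
    emittedThoughts.set(itemId, { encryptedCaptured: false });
  }
}

function onOutputItemDone(
  itemId: string,
  encryptedContent?: string,
): string | null {
  const state = emittedThoughts.get(itemId);
  if (state && !state.encryptedCaptured && encryptedContent) {
    // Thought text already went out; re-emit a hidden block so the
    // encrypted payload survives for stateless/ZDR round-trips.
    state.encryptedCaptured = true;
    return encryptedContent;
  }
  return null;
}

onReasoningTextDone('item_1');
const reEmit = onOutputItemDone('item_1', 'enc_abc');
console.log(reEmit); // 'enc_abc'
const second = onOutputItemDone('item_1', 'enc_abc');
console.log(second); // null (already captured)
```

With a plain Set, the late-arriving `output_item.done` carrying `encrypted_content` would be silently dropped; the Map lets the second event trigger exactly one hidden re-emit.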
The Thought event handler was pushing every thought to thinkingBlocksRef
without checking if that exact thought text was already present. When the
same thought arrives multiple times (e.g., from different event sources
or retries), this caused duplicate display.

Added check: only push to thinkingBlocksRef if the thought text isn't
already in the array.

Fixes #922
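
The dedup guard described above amounts to a presence check before pushing. A minimal sketch, with illustrative names (`ThoughtBlock`, `pushThought`) rather than the real `thinkingBlocksRef` types:

```typescript
// Only push a thought if its exact text is not already present,
// so retried or re-sourced events don't display twice.
interface ThoughtBlock {
  text: string;
}

const thinkingBlocksRef: ThoughtBlock[] = [];

function pushThought(text: string): void {
  if (!thinkingBlocksRef.some((b) => b.text === text)) {
    thinkingBlocksRef.push({ text });
  }
}

pushThought('Analyzing request');
pushThought('Analyzing request'); // duplicate, ignored
pushThought('Drafting reply');
console.log(thinkingBlocksRef.length); // 2
```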
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts`:
- Around lines 745-769: the current Codex branch removes 'reasoning' items by
creating requestInputWithoutReasoning and passing that to
injectSyntheticConfigFileRead, which drops encrypted reasoning from context;
instead, keep requestInput intact (do not filter out items with
type==='reasoning') and only create a separate list for injection: compute an
itemsToInject = requestInput.filter(item => !('type' in item && item.type ===
'reasoning')) and call this.injectSyntheticConfigFileRead(itemsToInject,
options, userMemory) so the synthetic tool call/result is prepended but the
original requestInput (including reasoning items, honored when
includeReasoningInContext is true) is preserved; adjust references to
requestInputWithoutReasoning accordingly and remove the assignment that replaces
requestInput with the filtered array.

In `@packages/core/src/providers/openai/parseResponsesStream.ts`:
- Around lines 80-87: the docs for
ParseResponsesStreamOptions.includeThinkingInResponse are inaccurate: update the
comment on includeThinkingInResponse to state the actual behavior — when true
the parser emits ThinkingBlock content normally, and when false the parser still
emits thinking content but as hidden/metadata-only thinking blocks (not surfaced
as visible response text) so callers know thinking is produced but marked
hidden; reference the ParseResponsesStreamOptions interface and the
includeThinkingInResponse property when making this change.
- Around lines 137-165: the current SSE debug logs in parseResponsesStream.ts
expose raw event payloads (e.g., event.delta, event.text, JSON.stringify(event),
event.item.summary/content) which can leak PII; update the logging in the block
that references event, lastLoggedType and logger.debug so it never logs raw text
or full JSON. Instead log high-level metadata only: event.type,
event.item?.type, summary_index, content_index, and numeric lengths (e.g., delta
length, text length, encrypted_content length) or presence booleans for
sensitive fields; remove any JSON.stringify(event) and any slicing of
event.delta/event.text or direct content values and ensure reasoning-specific
branches log only these safe metadata fields. Ensure lastLoggedType handling
remains unchanged.
📜 Review details

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bd2fb76 and ba82d58.

📒 Files selected for processing (3)
  • packages/cli/src/settings/ephemeralSettings.ts
  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
🧰 Additional context used
🧠 Learnings (5)
📓 Common learnings
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2026-01-13T19:28:00.789Z
Learning: In the llxprt-code codebase (`packages/core/src/providers/anthropic/AnthropicProvider.ts`), Anthropic's API returns `contentBlock.input` as an already-parsed JavaScript object, not a JSON string. The code was incorrectly calling `JSON.stringify(contentBlock.input)` before passing it to `processToolParameters()`, which was designed for OpenAI-style string parameters. This causes arrays and other complex types to be corrupted into strings (e.g., `paths` array becomes a string `"[\"**/*.toml\"]"` instead of actual array). The fix is to use `contentBlock.input` directly without stringifying for Anthropic provider.
📚 Learning: 2025-11-16T22:51:26.374Z
Learnt from: e2720pjk
Repo: vybestack/llxprt-code PR: 583
File: packages/core/src/providers/openai/OpenAIProvider.ts:935-959
Timestamp: 2025-11-16T22:51:26.374Z
Learning: In the llxprt-code codebase (packages/core/src/providers/openai/OpenAIProvider.ts), tools like `run_shell_command` use internal streaming only for real-time UI updates during execution, but each tool execution produces exactly ONE final `ToolResponseBlock` containing the full result. The streaming chunks are never sent to the LLM and are not converted into multiple tool messages. The OpenAI Chat Completions API requires that each tool call (tool_call_id) corresponds to exactly one message with role 'tool', so duplicate tool response detection that removes subsequent tool messages with the same tool_call_id is correct and necessary for API compliance.

Applied to files:

  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
📚 Learning: 2026-01-13T19:28:00.789Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2026-01-13T19:28:00.789Z
Learning: In the llxprt-code codebase (`packages/core/src/providers/anthropic/AnthropicProvider.ts`), Anthropic's API returns `contentBlock.input` as an already-parsed JavaScript object, not a JSON string. The code was incorrectly calling `JSON.stringify(contentBlock.input)` before passing it to `processToolParameters()`, which was designed for OpenAI-style string parameters. This causes arrays and other complex types to be corrupted into strings (e.g., `paths` array becomes a string `"[\"**/*.toml\"]"` instead of actual array). The fix is to use `contentBlock.input` directly without stringifying for Anthropic provider.

Applied to files:

  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
  • packages/core/src/providers/openai/parseResponsesStream.ts
📚 Learning: 2026-01-03T17:53:10.145Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2026-01-03T17:53:10.145Z
Learning: In Codex OAuth mode, the synthetic read_file tool call should always target AGENTS.md (matching CODEX_SYSTEM_PROMPT expectations), but include the actual userMemory content from whatever files are configured (LLXPRT.md, AGENTS.md, or both). The synthetic output should NOT disclose which files were actually loaded - the goal is to convince GPT it already read AGENTS.md to prevent redundant reads, regardless of the actual file configuration.

Applied to files:

  • packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts
📚 Learning: 2025-12-14T16:16:39.939Z
Learnt from: acoliver
Repo: vybestack/llxprt-code PR: 0
File: :0-0
Timestamp: 2025-12-14T16:16:39.939Z
Learning: MiniMax M2 model (hf:MiniMax/MiniMax-M2) requires reasoning.stripFromContext set to "none" to preserve interleaved thinking tokens in conversation history, as dropping thinking content harms agent reliability.

Applied to files:

  • packages/core/src/providers/openai/parseResponsesStream.ts
🧬 Code graph analysis (3)
packages/cli/src/settings/ephemeralSettings.ts (1)
packages/core/src/settings/settingsRegistry.ts (1)
  • resolveAlias (849-872)
packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (4)
packages/core/src/providers/BaseProvider.ts (1)
  • NormalizedGenerateChatOptions (63-80)
packages/core/src/providers/openai-responses/buildResponsesInputFromContent.ts (1)
  • ResponsesInputItem (27-35)
packages/core/src/services/history/IContent.ts (1)
  • ThinkingBlock (177-194)
packages/core/src/providers/openai/parseResponsesStream.ts (2)
  • ParseResponsesStreamOptions (80-87)
  • parseResponsesStream (89-482)
packages/core/src/providers/openai/parseResponsesStream.ts (1)
packages/core/src/services/history/IContent.ts (2)
  • IContent (21-40)
  • ContentBlock (93-99)
⏰ Context from checks skipped due to timeout of 270000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Test (ubuntu-latest, 24.x)
  • GitHub Check: Test (macos-latest, 24.x)
  • GitHub Check: Test (windows-latest, 24.x)
  • GitHub Check: E2E Test (Linux) - sandbox:none
  • GitHub Check: Slow E2E - Win
  • GitHub Check: E2E Test (Linux) - sandbox:docker
  • GitHub Check: E2E Test (macOS)
🔇 Additional comments (12)
packages/cli/src/settings/ephemeralSettings.ts (1)

62-78: LGTM!

The implementation correctly addresses the alias resolution issue from the prior review. Aliases are now resolved before validation, ensuring consistency with parseEphemeralSettingValue. The JSDoc is clear and the logic is straightforward.
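
The resolve-then-validate pattern being approved here can be sketched as follows. This is an illustration under assumed names (`ALIASES`, `VALID_SETTINGS`), not the actual registry from settingsRegistry.ts:

```typescript
// Map alias keys to canonical names before the membership check,
// so 'max-tokens' validates the same way its canonical key does.
const ALIASES: Record<string, string> = { 'max-tokens': 'maxTokens' };
const VALID_SETTINGS = new Set(['maxTokens', 'reasoning.enabled']);

function resolveAlias(key: string): string {
  return ALIASES[key] ?? key;
}

function isValidEphemeralSetting(key: string): boolean {
  return VALID_SETTINGS.has(resolveAlias(key));
}

console.log(isValidEphemeralSetting('max-tokens')); // true
console.log(isValidEphemeralSetting('bogus')); // false
```

Resolving before validating keeps isValidEphemeralSetting and parseEphemeralSettingValue in agreement on which keys are accepted.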

packages/core/src/providers/openai-responses/OpenAIResponsesProvider.ts (7)

328-408: Solid synthetic AGENTS.md injection.
Matches the Codex behavior requirement to always target AGENTS.md while returning actual userMemory without disclosing source files.

Based on learnings: In Codex OAuth mode, the synthetic read_file tool call should always target AGENTS.md and should not disclose which files were actually loaded.


410-470: Provider tool stubs and model params look good.
Clear, explicit “no server tools” behavior and safe settings extraction.


560-611: Reasoning-in-context injection is clean.
Encrypted reasoning is preserved and gated behind includeReasoningInContext as intended.


705-725: Request override filtering is solid.
Skipping nested reasoning avoids double‑applying settings.


803-895: Reasoning settings + text.verbosity wiring is consistent.
Nice precedence handling and explicit request shaping.


941-944: Request logging now safe.
Keys‑only logging avoids sensitive payload exposure.


1035-1044: Stream options wiring looks good.
Passing includeThinkingInResponse down to the parser is the right hook.

packages/core/src/providers/openai/parseResponsesStream.ts (4)

63-75: Nice helper extraction for reasoning deltas.
Keeps the main switch readable.


199-247: Reasoning done handlers look solid.
Hidden‑when‑disabled behavior preserves history without UI leakage.


271-354: Reasoning item handling and encrypted_content preservation are good.
Re‑emitting hidden blocks when encrypted content arrives later is a solid fix.


400-466: Fallback emission and usage handling are clean.
Good guardrails for late reasoning + usage metadata.


Comment on lines 745 to 769
// Build request input - filter out system messages for Codex (uses instructions field instead)
let requestInput = input;
if (isCodex) {
// In Codex mode, system prompt goes in instructions field, not input array
// Only filter items that have a 'role' property (function_call/function_call_output don't)
// Cast role to string to avoid TS2367 error - ResponsesInputItem union includes both role-bearing
// and non-role-bearing items, and we need to filter only those with role='system'
requestInput = requestInput.filter(
(msg) => !('role' in msg) || (msg.role as string) !== 'system',
);

// @issue #966: Pre-inject synthetic tool call/result for config files (LLXPRT.md/AGENTS.md)
// This prevents the model from wasting tool calls re-reading files already injected.
// Note: We no longer inject a steering prompt - the system prompt is properly
// conveyed via the `instructions` field (see below).
const requestInputWithoutReasoning = requestInput.filter(
(item) => !('type' in item && item.type === 'reasoning'),
);
this.injectSyntheticConfigFileRead(
requestInputWithoutReasoning,
options,
userMemory,
);
requestInput = requestInputWithoutReasoning;
}
Contributor

⚠️ Potential issue | 🟠 Major

Don’t drop reasoning input in Codex mode.
Filtering out reasoning items here prevents encrypted reasoning from being sent back in context even when includeReasoningInContext is true. That breaks round‑trip reasoning persistence. Preserve reasoning items and just prepend the synthetic call/output.

🔧 Suggested fix
-      const requestInputWithoutReasoning = requestInput.filter(
-        (item) => !('type' in item && item.type === 'reasoning'),
-      );
-      this.injectSyntheticConfigFileRead(
-        requestInputWithoutReasoning,
-        options,
-        userMemory,
-      );
-      requestInput = requestInputWithoutReasoning;
+      this.injectSyntheticConfigFileRead(requestInput, options, userMemory);

Comment on lines +80 to +87
export interface ParseResponsesStreamOptions {
/**
* Whether to emit ThinkingBlock content in the output stream.
* When false, reasoning content is still accumulated but not yielded.
* Defaults to true.
*/
includeThinkingInResponse?: boolean;
}
Contributor

⚠️ Potential issue | 🟡 Minor

Doc comment doesn’t match behavior.
The option says “not yielded” when false, but the code now emits hidden thinking blocks. Update the docstring to avoid misleading callers.

✏️ Doc fix
-   * Whether to emit ThinkingBlock content in the output stream.
-   * When false, reasoning content is still accumulated but not yielded.
+   * Whether to emit ThinkingBlock content in the output stream.
+   * When false, reasoning content is still emitted but marked isHidden.

Comment on lines +137 to +165
// SSE event visibility for debugging reasoning support.
// We log to stderr directly so it shows up in debug logs even if
// Track last logged type to avoid duplicate logs
if (event.type !== lastLoggedType) {
lastLoggedType = event.type;
}

// Debug: Log ALL events with full details
logger.debug(
() =>
`SSE event: type=${event.type}, delta="${event.delta?.slice(0, 50) ?? ''}", text="${event.text?.slice(0, 50) ?? ''}", item_type=${event.item?.type ?? 'none'}, summary_index=${event.summary_index ?? 'none'}, content_index=${event.content_index ?? 'none'}`,
);
// Extra debug for any reasoning-related events
if (
event.type.includes('reasoning') ||
event.item?.type === 'reasoning'
) {
logger.debug(
() => `REASONING SSE: ${JSON.stringify(event).slice(0, 500)}`,
);
}

// Debug: Log raw reasoning items
if (event.item?.type === 'reasoning') {
logger.debug(
() =>
`Reasoning item received: summary=${JSON.stringify(event.item?.summary)}, content=${JSON.stringify(event.item?.content)}, encrypted_content_length=${event.item?.encrypted_content?.length ?? 0}`,
);
}
Contributor

⚠️ Potential issue | 🟠 Major

Avoid logging raw SSE content (PII risk).
The debug logs include reasoning/text deltas and full reasoning event payloads. That can leak user data into logs. Prefer logging lengths/keys only.

🔒 Suggested redaction
-            logger.debug(
-              () =>
-                `SSE event: type=${event.type}, delta="${event.delta?.slice(0, 50) ?? ''}", text="${event.text?.slice(0, 50) ?? ''}", item_type=${event.item?.type ?? 'none'}, summary_index=${event.summary_index ?? 'none'}, content_index=${event.content_index ?? 'none'}`,
-            );
+            logger.debug(
+              () =>
+                `SSE event: type=${event.type}, delta_len=${event.delta?.length ?? 0}, text_len=${event.text?.length ?? 0}, item_type=${event.item?.type ?? 'none'}, summary_index=${event.summary_index ?? 'none'}, content_index=${event.content_index ?? 'none'}`,
+            );
...
-              logger.debug(
-                () => `REASONING SSE: ${JSON.stringify(event).slice(0, 500)}`,
-              );
+              logger.debug(
+                () =>
+                  `REASONING SSE: type=${event.type}, item_id=${event.item_id ?? 'none'}, encrypted_len=${event.item?.encrypted_content?.length ?? 0}`,
+              );
📝 Committable suggestion

Suggested change (original code, then the replacement):
// SSE event visibility for debugging reasoning support.
// We log to stderr directly so it shows up in debug logs even if
// Track last logged type to avoid duplicate logs
if (event.type !== lastLoggedType) {
lastLoggedType = event.type;
}
// Debug: Log ALL events with full details
logger.debug(
() =>
`SSE event: type=${event.type}, delta="${event.delta?.slice(0, 50) ?? ''}", text="${event.text?.slice(0, 50) ?? ''}", item_type=${event.item?.type ?? 'none'}, summary_index=${event.summary_index ?? 'none'}, content_index=${event.content_index ?? 'none'}`,
);
// Extra debug for any reasoning-related events
if (
event.type.includes('reasoning') ||
event.item?.type === 'reasoning'
) {
logger.debug(
() => `REASONING SSE: ${JSON.stringify(event).slice(0, 500)}`,
);
}
// Debug: Log raw reasoning items
if (event.item?.type === 'reasoning') {
logger.debug(
() =>
`Reasoning item received: summary=${JSON.stringify(event.item?.summary)}, content=${JSON.stringify(event.item?.content)}, encrypted_content_length=${event.item?.encrypted_content?.length ?? 0}`,
);
}
// SSE event visibility for debugging reasoning support.
// We log to stderr directly so it shows up in debug logs even if
// Track last logged type to avoid duplicate logs
if (event.type !== lastLoggedType) {
lastLoggedType = event.type;
}
// Debug: Log ALL events with full details
logger.debug(
() =>
`SSE event: type=${event.type}, delta_len=${event.delta?.length ?? 0}, text_len=${event.text?.length ?? 0}, item_type=${event.item?.type ?? 'none'}, summary_index=${event.summary_index ?? 'none'}, content_index=${event.content_index ?? 'none'}`,
);
// Extra debug for any reasoning-related events
if (
event.type.includes('reasoning') ||
event.item?.type === 'reasoning'
) {
logger.debug(
() =>
`REASONING SSE: type=${event.type}, item_id=${event.item_id ?? 'none'}, encrypted_len=${event.item?.encrypted_content?.length ?? 0}`,
);
}
// Debug: Log raw reasoning items
if (event.item?.type === 'reasoning') {
logger.debug(
() =>
`Reasoning item received: summary=${JSON.stringify(event.item?.summary)}, content=${JSON.stringify(event.item?.content)}, encrypted_content_length=${event.item?.encrypted_content?.length ?? 0}`,
);
}

Per OpenAI docs: 'we highly recommend you pass back any reasoning items
returned with the last function call... This allows the model to continue
its reasoning process to produce better results in the most token-efficient
manner.'

Previously, reasoning items were filtered out before the synthetic config
injection and never restored. Now we:
1. Filter reasoning only for finding the injection point
2. Merge injected items back with preserved reasoning items
3. Maintain proper order: injected -> reasoning -> non-reasoning

This follows the same pattern as MiniMax M2 which requires reasoning
tokens in conversation history for agent reliability.

Fixes CodeRabbit comment about Codex dropping reasoning input items.
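
The filter/merge described in this commit message can be sketched roughly as below. The types and helpers (`InputItem`, `syntheticConfigRead`, `buildRequestInput`) are illustrative stand-ins for ResponsesInputItem and injectSyntheticConfigFileRead, not the provider's actual code:

```typescript
// Set reasoning items aside while injecting the synthetic config read,
// then restore them so the order is injected -> reasoning -> non-reasoning.
type InputItem =
  | { type: 'reasoning'; encrypted_content: string }
  | { type: 'message'; role: string; content: string }
  | { type: 'function_call'; name: string };

function syntheticConfigRead(): InputItem[] {
  // Stand-in for the injected synthetic tool call/result pair.
  return [
    { type: 'function_call', name: 'read_file' },
    { type: 'message', role: 'tool', content: 'AGENTS.md contents' },
  ];
}

function buildRequestInput(input: InputItem[]): InputItem[] {
  const reasoning = input.filter((i) => i.type === 'reasoning');
  const nonReasoning = input.filter((i) => i.type !== 'reasoning');
  return [...syntheticConfigRead(), ...reasoning, ...nonReasoning];
}

const input: InputItem[] = [
  { type: 'reasoning', encrypted_content: 'enc_1' },
  { type: 'message', role: 'user', content: 'hello' },
];
const result = buildRequestInput(input);
console.log(result.map((i) => i.type).join(','));
// function_call,message,reasoning,message
```

The key point is that reasoning is filtered only to find the injection point; the encrypted reasoning items themselves survive into the final request input.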
@acoliver acoliver merged commit ddec3b5 into main Jan 29, 2026
20 checks passed

Labels

maintainer:e2e:ok Trusted contributor; maintainer-approved E2E run
required for 0.9.0

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants